Compare commits

..

81 Commits

Author SHA1 Message Date
buildmaster
2df0377acb Update SNAPSHOT to 3.0.0.M3 2019-08-13 07:57:57 +00:00
Oleg Zhurakousky
164948ad33 Bumped spring-kafka and si-kafka snapshots to milestones 2019-08-13 09:43:09 +02:00
Oleg Zhurakousky
a472845cb4 Adjusted docs with recent s-c-build updates 2019-08-13 09:29:03 +02:00
Oleg Zhurakousky
12db5fc20e Ignored failing tests 2019-08-13 09:24:30 +02:00
Soby Chacko
46f1b41832 Interop between bootstrap server configuration
When using the Kafka Streams binder, if the application chooses to
provide bootstrap server configuration through the Kafka binder broker
property, then allow that. This way, either type of broker config
works in the Kafka Streams binder.

Resolves #401
Resolves #720
2019-08-13 08:45:05 +02:00
Soby Chacko
64ca773a4f Kafka Streams application id changes
Generate a random application id for Kafka Streams binder if the user
doesn't set one for the application. This is useful for development
purposes, as it avoids the creation of an explicit application id.
For production workloads, it is highly recommended to explicitly provide
an application id.

The generated application id follows a pattern: it uses the function bean
name, followed by a random UUID string, followed by the literal applicationId.
In the case of StreamListener, instead of the function bean name, it uses the containing class +
StreamListener method name.

If the binder generates the application id, that information will be logged on the console
at startup.

Resolves #718
Resolves #719
2019-08-13 08:43:07 +02:00
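(An explicit application id can typically be set with the binder-level property `spring.cloud.stream.kafka.streams.binder.applicationId`; this property name is cited from the binder's configuration conventions rather than from this commit message.)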
Soby Chacko
a92149f121 Adding BiConsumer support for Kafka Streams binder
Resolves #716
Resolves #717
2019-08-13 08:31:50 +02:00
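As an illustration of the BiConsumer support described in the commit above, a minimal sketch might look like the following; the bean name `process`, the String/KTable input types, and the join are assumed for illustration only.

[source, java]
----
import java.util.function.BiConsumer;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BiConsumerSketch {

	// Two bound inputs and no output: the binder binds the KStream and KTable,
	// and the BiConsumer consumes both without producing to an output topic.
	@Bean
	public BiConsumer<KStream<String, String>, KTable<String, String>> process() {
		return (stream, table) -> stream
				.join(table, (streamValue, tableValue) -> streamValue + " -> " + tableValue)
				.foreach((key, value) -> System.out.println(key + " : " + value));
	}
}
----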
Gary Russell
2721fe4d05 GH-713: Add test for e2e transactional binder
See https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/713

The problem was already fixed in SIK.

Resolves #714
2019-08-13 08:29:09 +02:00
Soby Chacko
8907693ffb Revamping Kafka Streams docs for functional style
Resolves #703
Resolves #721
Addressing PR review comments

Addressing PR review comments
2019-08-13 08:17:48 +02:00
Oleg Zhurakousky
ca0bfc028f minor cleanup 2019-08-08 20:59:05 +02:00
Soby Chacko
688f05cbc9 Joining two input KStreams
When joining two input KStreams, the binder throws an exception.
Fixing this issue by passing the wrapped target KStream from the
proxy object to the adapted StreamListener method.

Resolves #701
2019-08-08 14:19:21 -04:00
Oleg Zhurakousky
acb4180ea3 GH-1767-core changes related to the disconnection of @StreamMessageConverter 2019-08-06 19:14:49 +02:00
Soby Chacko
77e4087871 Handle Deserialization errors when there is a key (#711)
* Handle Deserialization errors when there is a key

Handle DLQ sending on deserialization errors when there is a key in the record.

Resolves #635

* Adding keySerde information in the functional bindings
2019-08-05 13:58:48 -04:00
Gary Russell
0d339e77b8 GH-683: Update docs 2019-08-02 09:30:36 -04:00
Gary Russell
799eb5be28 GH-683: MessageKey header from payload property
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/683

If the `messageKeyExpression` references the payload or entire message, add an
interceptor to evaluate the expression before the payload is converted.

The interceptor is not needed when native encoding is in use because the payload
will be unchanged when it reaches the adapter.
2019-08-01 17:00:05 -04:00
Soby Chacko
2c7615db1f Temporarily ignore a test 2019-08-01 14:32:39 -04:00
Gary Russell
b2b1438ad7 Fix resetOffsets for manual partition assignment 2019-07-19 18:47:41 -04:00
Gary Russell
87399fe904 Updates for latest S-K snapshot
- change `TopicPartitionInitialOffset` to `TopicPartitionOffset`
- when resetting with manual assignment, set the SeekPosition earlier rather than
  relying on direct update of the `ContainerProperties` field
- set `missingTopicsFatal` to `false` in mock test to avoid log messages about missing bootstrap servers
2019-07-17 18:39:53 -04:00
Soby Chacko
f03e3e17c6 Provide a CollectionSerde implementation
Resolves #702
2019-07-11 17:08:25 -04:00
Gary Russell
48aae76ec9 GH-694: Add recordMetadataChannel (send confirm.)
Resolves #694

Also fix deprecation of `ChannelInterceptorAware`.
2019-07-10 10:48:08 -04:00
Soby Chacko
6976bcd3a8 Docs polishing 2019-07-09 15:59:23 -04:00
Gary Russell
912ab14309 GH-689: Support producer-only transactions
Resolves #689
2019-07-09 15:59:23 -04:00
Oleg Zhurakousky
b4fcb791c3 General polishing and clean up after review
Resolves #692
2019-07-09 20:46:01 +02:00
Soby Chacko
ca3f20ff26 Default multiple input/output binding names in the functional support will be separated using underscores (_).
For example: process_in, process_in_0, process_out_0, etc.

Adding cleanup config to ktable integration tests.
2019-07-08 19:15:02 -04:00
Soby Chacko
97ecf48867 BiFunction support in Kafka Streams binder
* Introduce the ability to provide BiFunction beans as Kafka Streams processors.
* Bypass Spring Cloud Function catalogue for bean lookup by directly querying the bean factory.
* Add test to verify BiFunction behavior.

Resolves #690
2019-07-08 15:01:07 -04:00
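As an illustration of the BiFunction support introduced above, a minimal sketch might look like the following; the bean name `process` and the key/value types are assumed for illustration only.

[source, java]
----
import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BiFunctionSketch {

	// Two bound inputs (a KStream and a KTable) and one bound output (a KStream):
	// the binder looks this bean up directly from the bean factory, as described
	// in the commit above.
	@Bean
	public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
		// Join the stream against the table and keep the stream-side value.
		return (stream, table) -> stream.join(table, (count, label) -> count);
	}
}
----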
Soby Chacko
4d0b62f839 Functional enhancements in Kafka Streams binder
This PR introduces some fundamental changes in the way the functional model of Kafka Streams applications
is supported in the binder. For the most part, this PR comprises the following changes.

* Remove the usage of EnableBinding in Kafka Streams based Spring Cloud Stream applications where
  a bean of type java.util.function.Function or java.util.function.Consumer is provided (instead of
  a StreamListener). For StreamListener based Kafka Streams applications, EnableBinding is still
  necessary and required.

* Target types (KStream, KTable, GlobalKTable) will be inferred and bound by the binder through
  a new binder specific bindable proxy factory.

* By default, input bindings are named <functionName>-input for functions with a single input
  and <functionName>-input-0...<functionName>-input-n for functions with multiple inputs.

* By default, output bindings are named <functionName>-output for functions with a single output
  and <functionName>-output-0...<functionName>-output-n for functions with multiple outputs.

* If applications prefer custom input binding names, the defaults can be overridden through
  spring.cloud.stream.function.inputBindings.<functionName>. Similarly, if custom output binding names
  are needed, that can be done through spring.cloud.stream.function.outputBindings.<functionName>.

* Test changes

* Refactoring and polishing

Resolves #688
2019-07-08 15:01:07 -04:00
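A minimal sketch of an application using this functional model (no EnableBinding), assuming a single-input, single-output function bean named `process`:

[source, java]
----
import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApplication {

	public static void main(String[] args) {
		SpringApplication.run(UppercaseApplication.class, args);
	}

	// The binder infers the KStream target type and creates the bindings for this
	// bean (by the defaults described above, e.g. process-input and process-output).
	@Bean
	public Function<KStream<String, String>, KStream<String, String>> process() {
		return input -> input.mapValues(value -> value.toUpperCase());
	}
}
----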
Oleg Zhurakousky
592b9a02d8 Binder changes related to GH-1729
addressed PR comment

Resolves #670
2019-07-08 17:32:52 +02:00
Soby Chacko
4ca58bea06 Polishing the previous upgrade commit
* Upgrade Kafka versions used in tests.
 * EmbeddedKafkaRule changes in tests.
 * KafkaTransactionTests changes (createProducer expectations in Mockito).
 * Kafka Streams bootstrap server now returns an ArrayList instead of a String.
   Making the necessary changes in the binder to accommodate this.
 * Kafka Streams state store tests - retention window changes.
2019-07-06 14:24:09 -04:00
Jeff Maxwell
108ccbc572 Update kafka.version, spring-kafka.version
Update kafka.version to 2.3.0 and spring-kafka.version to 2.3.0.BUILD-SNAPSHOT

Resolves #696
2019-07-06 14:24:09 -04:00
buildmaster
303679b9f9 Going back to snapshots 2019-07-03 14:56:50 +00:00
buildmaster
35434c680b Update SNAPSHOT to 3.0.0.M2 2019-07-03 14:56:00 +00:00
Oleg Zhurakousky
e382604b04 Change s-i-kafka to 3.2.0.M3 2019-07-03 16:38:05 +02:00
Soby Chacko
bfe9529a51 Topic provisioning - Kafka streams table types
* Topic provisioning for consumers ignores topic.properties settings if binding type is KTable or GlobalKTable
* Modify tests to verify the behavior during KTable/GlobalKTable bindings
* Polishing

Resolves #687
2019-07-02 13:17:25 -04:00
Gary Russell
d8df388d9f GH-423 Add option to use KafkaHeaders.TOPIC Header
Resolves https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/423

Add an option to override the default binding destination with the value
of the header, if present.
2019-06-18 16:28:38 -04:00
Gary Russell
569344afa6 GH-677: Fix resetOffsets with concurrency
Fixes https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/677

The logic for resetting offsets only on the initial assignment used a simple
boolean; this is insufficient when concurrency is > 1.

Use a concurrent set instead to determine whether or not a particular topic/partition
has been sought.

Also, change the `initial` argument on `KafkaBindingRebalanceListener.onPartitionsAssigned()`
to be derived from a `ThreadLocal` and add javadocs about retaining the state by partition.

**backport to all supported versions** (Except `KafkaBindingRebalanceListener` which did
not exist before 2.1.x)

polishing
2019-06-18 16:13:56 -04:00
Oleg Zhurakousky
c1e58d187a Polishing KafkaStreamsBinderUtils
Added registerBean(..) instead of registerSingleton

Resolves #675
2019-06-18 16:21:49 +02:00
Soby Chacko
12afaf1144 Update spring-cloud-build to 2.2.0 snapshot
Update Spring Integration Kafka to 3.2.0 snapshot
2019-06-17 18:43:49 -04:00
Soby Chacko
bbb455946e Refactor common code in Kafka Streams binder.
Both StreamListener and the functional model require many code paths
that are common. Refactoring them for better use and readability.

Resolves #674
2019-06-17 18:11:11 -04:00
Soby Chacko
5dac51df62 Infer SerDes used on input/output when possible
When using native de/serialization (which is now the default), the binder should infer the common Serdes
used on input and output. The Serdes inferred are Integer, Long, Short, Double, Float, String, byte[] and
the Spring Kafka provided JsonSerde.

Resolves #368

Address PR review comments

Addressing PR review comments

Resolves #672
2019-06-17 14:59:13 +02:00
Oleg Zhurakousky
c1db3d950c Added author tag 2019-06-13 20:19:37 +02:00
Vladislav Fefelov
4d59670096 Use single Executor Service in Kafka Binder health indicator
Resolves #665
2019-06-13 20:19:37 +02:00
Oleg Zhurakousky
d213b4ff2c Merge pull request #669 from sobychacko/gh-651
Use native Serde for Kafka Streams binder as the default
2019-06-12 18:35:03 +02:00
buildmaster
66198e345a Going back to snapshots 2019-06-11 12:07:50 +00:00
buildmaster
df8d151878 Update SNAPSHOT to 3.0.0.M1 2019-06-11 12:07:03 +00:00
Oleg Zhurakousky
d96dc8361b Bumped s-c-build to 2.2.0.M2 2019-06-11 13:47:29 +02:00
Soby Chacko
9c88b4d808 Use native Serde for Kafka Streams binder
In Kafka Streams binder, use the native Serde mechanism as the default instead of the framework
provided message conversion on the input and output bindings. This makes applications
written using the Kafka Streams binder more compatible with native concepts and mechanisms.
Users can still disable native encoding and decoding through configuration.

Amend tests to accommodate the flipped configuration.

Resolves #651
2019-06-10 21:17:38 -04:00
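(To fall back to framework message conversion on a particular binding, the standard Spring Cloud Stream properties `spring.cloud.stream.bindings.<binding>.consumer.useNativeDecoding=false` and `spring.cloud.stream.bindings.<binding>.producer.useNativeEncoding=false` can typically be used; `<binding>` is a placeholder for the binding name.)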
Oleg Zhurakousky
94be206651 Updated POMs for 3.0.0.M1 release 2019-06-10 14:49:49 +02:00
iguissouma
4ab6432f23 Use try with resources when creating AdminClient
Use try with resources when creating AdminClient to release all associated resources.
Fixes gh-660
2019-06-04 09:37:35 -04:00
Walliee
24b52809ed Switch back to use DefaultKafkaHeaderMapper
* update spring-kafka to v2.2.5
* use DefaultKafkaHeaderMapper instead of BinderHeaderMapper
* delete BinderHeaderMapper

resolves #652
2019-05-31 22:15:12 -04:00
Soby Chacko
7450d0731d Fixing ignored tests from upgrade 2019-05-28 15:02:06 -04:00
Oleg Zhurakousky
08b41f7396 Upgraded master to 3.0.0 2019-05-28 16:42:05 +02:00
Oleg Zhurakousky
e725a172ba Bumping version to 2.3.0-B-S for master 2019-05-20 06:33:35 -05:00
buildmaster
b7a3511375 Bumping versions to 2.2.1.BUILD-SNAPSHOT after release 2019-05-07 12:52:13 +00:00
buildmaster
d6c06286cd Going back to snapshots 2019-05-07 12:52:13 +00:00
buildmaster
321331103b Update SNAPSHOT to 2.2.0.RELEASE 2019-05-07 12:50:54 +00:00
Oleg Zhurakousky
f549ae069c Prep doc POM instructions for release 2019-05-07 13:38:19 +02:00
Oleg Zhurakousky
2cc60a744d Merge pull request #649 from sobychacko/gh-633
Adding docs for Kafka Streams functional support
2019-05-01 21:42:36 +02:00
Oleg Zhurakousky
2ddc837c1a polish
Resolves #648
2019-05-01 21:38:59 +02:00
Soby Chacko
816a4ec232 Kafka stream outbound content type polishing
Using Jackson to write the content type as bytes so that the outgoing
content type string is properly JSON compatible.
2019-05-01 21:29:14 +02:00
Soby Chacko
1a19eeabec Content type not carried on the outbound (Kafka Streams)
Fixing the issue of the content type not being propagated as a Kafka header
on the outbound during message serialization by the Kafka Streams binder.

For more info, see these links:

https://stackoverflow.com/questions/55796503/spring-cloud-stream-kafka-application-not-generating-messages-with-the-correct-a
https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/456#issuecomment-439963419

Resolves #456
2019-05-01 21:29:14 +02:00
Soby Chacko
d386ff7923 Make KeyValueSerdeResolver public
Resolves #644
Resolves #645
2019-05-01 21:25:23 +02:00
Soby Chacko
9644f2dbbf Kafka Streams multiple function issues
Fixing a bug that occurs when we have multiple beans defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636
2019-05-01 21:13:33 +02:00
Oleg Zhurakousky
498998f1a4 polishing
Resolves #643
2019-05-01 21:12:40 +02:00
Soby Chacko
5558f7f7fc Custom state stores with functional Kafka Streams
Introducing the ability to provide custom state stores as regular
Spring beans and use them in the functional model of Kafka Streams binding.

Resolves #642
2019-05-01 21:00:47 +02:00
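A minimal sketch of what a custom state store declared as a regular Spring bean might look like; the store name `my-store` and the serdes are assumed for illustration only.

[source, java]
----
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StateStoreSketch {

	// A state store declared as an ordinary Spring bean; with the change above,
	// the binder can register it with the Kafka Streams topology so that processors
	// in the functional model can look it up by name ("my-store").
	@Bean
	public StoreBuilder<KeyValueStore<String, Long>> myStore() {
		return Stores.keyValueStoreBuilder(
				Stores.persistentKeyValueStore("my-store"),
				Serdes.String(), Serdes.Long());
	}
}
----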
Oleg Zhurakousky
dec9ff696f polishing
Resolves #637
2019-05-01 20:26:52 +02:00
Soby Chacko
5014ab8fe1 Kafka Streams multiple function issues
Fixing a bug that occurs when we have multiple beans defined as functions/consumers in the new
Kafka Streams binder functional support.

Polishing

Resolves #636

Filter in only Kafka streams function beans
2019-05-01 20:26:52 +02:00
Sabby Anandan
3d3f02b6cc Switch to KafkaStreamsProcessor
It appears we have refactored to rename the class from `KStreamProcessor` to `KafkaStreamsProcessor` [see a5344655cb (diff-4a8582ee2d07e268f77a89c0633e42f5)], but the docs weren't updated. This commit updates them.
2019-04-30 13:23:44 -07:00
Soby Chacko
7790d4b196 Addressing PR review comments 2019-04-29 19:29:46 -04:00
Soby Chacko
602aed8a4d Adding docs for Kafka Streams functional support
Resolves #633
2019-04-27 20:46:27 -04:00
Guillaume Lemont
c1f4cf9dc6 Fix message listener container bean naming
With multiplexed topics, topics can be an array. Since the message listener container
bean name is created by calling toString() on the topics, when it is an array
it creates an unusual bean name. Fixing the issue by creating the bean name
using the destination string directly.
2019-04-26 15:11:20 -04:00
buildmaster
c990140452 Going back to snapshots 2019-04-09 18:10:36 +00:00
buildmaster
383add504a Update SNAPSHOT to 2.2.0.RC1 2019-04-09 18:09:51 +00:00
Oleg Zhurakousky
5f9395a5ec Prepared docs for RC1 2019-04-09 19:23:29 +02:00
Soby Chacko
70eb25d413 Fix failing test 2019-04-09 13:02:57 -04:00
Oleg Zhurakousky
6bc74c6e5c Doc changes related to Rabbit's 'GH-193 Added note on default properties' 2019-04-09 18:56:15 +02:00
Soby Chacko
33603c62f0 Transactional binder producer factory
With a transactional binder, the producer factory should not be destroyed.

Resolves #626
2019-04-08 15:27:06 -04:00
Anshul Mehra
efd46835a1 GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration (#562)
* GH-525: Ignore enable.auto.commit and group.id from merged consumer configuration

Resolves #525

* Update warning messages to be more explicit
2019-04-08 12:44:52 -04:00
Oleg Zhurakousky
3a267bc751 Merge pull request #620 from sobychacko/gh-589
Bean name conflicts (Kafka Streams binder)
2019-03-28 18:33:01 +01:00
Soby Chacko
62e98df0c7 Bean name conflicts (Kafka Streams binder)
When two processors with the same name are present in the same application,
there is a bean creation conflict. Fixing that issue.

Add test to verify.
Modify existing tests.

Resolves #589
2019-03-27 16:30:22 -04:00
Matthieu Ghilain
9e156911b4 Fixing typo in documentation
Resolves #619
2019-03-26 18:49:49 +01:00
Oleg Zhurakousky
cd28454818 Updated docs pom back to snapshot 2019-03-25 18:46:05 +01:00
85 changed files with 4191 additions and 2246 deletions

View File

@@ -132,7 +132,7 @@ If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -162,6 +162,9 @@ Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
@@ -284,6 +287,9 @@ Default: none (the binder-wide default of 1 is used).
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
@@ -311,7 +317,8 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
@@ -341,7 +348,21 @@ The replication factor to use when provisioning topics. Overrides the binder-wid
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -516,6 +537,50 @@ public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
return new KafkaTransactionManager<>(pf);
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then use normal Spring transaction support, such as `TransactionTemplate` or `@Transactional`, as shown in the following example:
====
[source, java]
----
public static class Sender {
@Transactional
public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
}
}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
[[kafka-error-channels]]
=== Error Channels

View File

@@ -7,7 +7,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
</parent>
<packaging>pom</packaging>
<name>spring-cloud-stream-binder-kafka-docs</name>
@@ -15,311 +15,50 @@
<properties>
<docs.main>spring-cloud-stream-binder-kafka</docs.main>
<main.basedir>${basedir}/..</main.basedir>
<spring-doc-resources.version>0.1.1.RELEASE</spring-doc-resources.version>
<spring-asciidoctor-extensions.version>0.1.0.RELEASE
</spring-asciidoctor-extensions.version>
<asciidoctorj-pdf.version>1.5.0-alpha.16</asciidoctorj-pdf.version>
<maven.plugin.plugin.version>3.4</maven.plugin.plugin.version>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.8.2</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>docs</id>
<build>
<plugins>
<plugin>
<groupId>pl.project13.maven</groupId>
<artifactId>git-commit-id-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>${maven-dependency-plugin.version}</version>
<inherited>false</inherited>
<executions>
<execution>
<id>unpack-docs</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>org.springframework.cloud
</groupId>
<artifactId>spring-cloud-build-docs
</artifactId>
<version>${spring-cloud-build.version}
</version>
<classifier>sources</classifier>
<type>jar</type>
<overWrite>false</overWrite>
<outputDirectory>${docs.resources.dir}
</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
<execution>
<id>unpack-docs-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<artifactItems>
<artifactItem>
<groupId>io.spring.docresources</groupId>
<artifactId>spring-doc-resources</artifactId>
<version>${spring-doc-resources.version}</version>
<type>zip</type>
<overWrite>true</overWrite>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<executions>
<execution>
<id>copy-asciidoc-resources</id>
<phase>generate-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/refdocs/</outputDirectory>
<resources>
<resource>
<directory>src/main/asciidoc</directory>
<filtering>false</filtering>
<excludes>
<exclude>ghpages.sh</exclude>
</excludes>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<version>${asciidoctor-maven-plugin.version}</version>
<inherited>false</inherited>
<dependencies>
<dependency>
<groupId>io.spring.asciidoctor</groupId>
<artifactId>spring-asciidoctor-extensions</artifactId>
<version>${spring-asciidoctor-extensions.version}</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj-pdf</artifactId>
<version>${asciidoctorj-pdf.version}</version>
</dependency>
</dependencies>
<configuration>
<sourceDirectory>${project.build.directory}/refdocs/</sourceDirectory>
<sourceDocumentName>${docs.main}.adoc</sourceDocumentName>
<attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
<!-- <docs-url>https://cloud.spring.io/</docs-url> -->
<!-- <docs-version></docs-version> -->
<docs-version>${project.version}/</docs-version>
<docs-url>https://cloud.spring.io/spring-cloud-static/</docs-url>
</attributes>
<spring-cloud-stream-version>${project.version}</spring-cloud-stream-version>
</attributes>
</configuration>
<executions>
<execution>
<id>generate-html-documentation</id>
<phase>prepare-package</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
<configuration>
<backend>html5</backend>
<sourceHighlighter>highlight.js</sourceHighlighter>
<doctype>book</doctype>
<attributes>
// these attributes are required to use the doc resources
<docinfo>shared</docinfo>
<stylesdir>css/</stylesdir>
<stylesheet>spring.css</stylesheet>
<linkcss>true</linkcss>
<icons>font</icons>
<highlightjsdir>js/highlight</highlightjsdir>
<highlightjs-theme>atom-one-dark-reasonable</highlightjs-theme>
<allow-uri-read>true</allow-uri-read>
<nofooter />
<toc>left</toc>
<toc-levels>4</toc-levels>
<spring-cloud-version>${project.version}</spring-cloud-version>
<sectlinks>true</sectlinks>
</attributes>
<outputFile>${docs.main}.html</outputFile>
</configuration>
</execution>
<execution>
<id>generate-docbook</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
<execution>
<id>generate-index</id>
<phase>none</phase>
<goals>
<goal>process-asciidoc</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>${maven-antrun-plugin.version}</version>
<dependencies>
<dependency>
<groupId>ant-contrib</groupId>
<artifactId>ant-contrib</artifactId>
<version>1.0b3</version>
<exclusions>
<exclusion>
<groupId>ant</groupId>
<artifactId>ant</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.ant</groupId>
<artifactId>ant-nodeps</artifactId>
<version>1.8.1</version>
</dependency>
<dependency>
<groupId>org.tigris.antelope</groupId>
<artifactId>antelopetasks</artifactId>
<version>3.2.10</version>
</dependency>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
<version>1.7.17</version>
</dependency>
<dependency>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctorj</artifactId>
<version>1.5.8</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>readme</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<java classname="org.jruby.Main" failonerror="yes">
<arg
value="${docs.resources.dir}/ruby/generate_readme.sh" />
<arg value="-o" />
<arg value="${main.basedir}/README.adoc" />
</java>
</target>
</configuration>
</execution>
<execution>
<id>assert-no-unresolved-links</id>
<phase>prepare-package</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<fileset id="unresolved.file"
dir="${basedir}/target/generated-docs/" includes="**/*.html">
<contains text="Unresolved" />
</fileset>
<fail message="[Unresolved] Found...failing">
<condition>
<resourcecount when="greater" count="0"
refid="unresolved.file" />
</condition>
</fail>
</target>
</configuration>
</execution>
<execution>
<id>setup-maven-properties</id>
<phase>validate</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<exportAntProperties>true</exportAntProperties>
<target>
<taskdef
resource="net/sf/antcontrib/antcontrib.properties" />
<taskdef name="stringutil"
classname="ise.antelope.tasks.StringUtilTask" />
<var name="version-type" value="${project.version}" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp=".*\.(.*)"
replace="\1" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(M)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="(RC)\d+"
replace="MILESTONE" />
<propertyregex property="version-type"
override="true" input="${version-type}" regexp="BUILD-(.*)"
replace="SNAPSHOT" />
<stringutil string="${version-type}"
property="spring-cloud-repo">
<lowercase />
</stringutil>
<var name="github-tag" value="v${project.version}" />
<propertyregex property="github-tag"
override="true" input="${github-tag}" regexp=".*SNAPSHOT"
replace="master" />
</target>
</configuration>
</execution>
<execution>
<id>copy-css</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>generate-documentation-index</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>copy-generated-html</id>
<phase>none</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<inherited>false</inherited>
</plugin>
</plugins>
</build>

Binary file not shown (After: image, 117 KiB).

File diff suppressed because it is too large.

View File

@@ -112,7 +112,7 @@ If set to `true`, the binder creates new topics automatically.
If set to `false`, the binder relies on the topics being already configured.
In the latter case, if the topics do not exist, the binder fails to start.
+
NOTE: This setting is independent of the `auto.topic.create.enable` setting of the broker and does not influence it.
NOTE: This setting is independent of the `auto.create.topics.enable` setting of the broker and does not influence it.
If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.
+
Default: `true`.
@@ -142,6 +142,9 @@ Default: none.
[[kafka-consumer-properties]]
==== Kafka Consumer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka consumers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
@@ -264,6 +267,9 @@ Default: none (the binder-wide default of 1 is used).
[[kafka-producer-properties]]
==== Kafka Producer Properties
NOTE: To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of `spring.cloud.stream.default.<property>=<value>`.
The following properties are available for Kafka producers only and
must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
@@ -291,7 +297,8 @@ How long the producer waits to allow more messages to accumulate in the same bat
Default: `0`.
messageKeyExpression::
A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message -- for example, `headers['myKey']`.
The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a `byte[]`.
With versions before 3.0, the payload could not be used unless native encoding was being used because, by the time this expression was evaluated, the payload was already in the form of a `byte[]`.
Now, the expression is evaluated before the payload is converted.
+
Default: `none`.
headerPatterns::
@@ -321,7 +328,21 @@ The replication factor to use when provisioning topics. Overrides the binder-wid
Ignored if `replicas-assignments` is present.
+
Default: none (the binder-wide default of 1 is used).
useTopicHeader::
Set to `true` to override the default binding destination (topic name) with the value of the `KafkaHeaders.TOPIC` message header in the outbound message.
If the header is not present, the default binding destination is used.
Default: `false`.
+
recordMetadataChannel::
The bean name of a `MessageChannel` to which successful send results should be sent; the bean must exist in the application context.
The message sent to the channel is the sent message (after conversion, if any) with an additional header `KafkaHeaders.RECORD_METADATA`.
The header contains a `RecordMetadata` object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
`RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class)`
Failed sends go to the producer error channel (if configured); see <<kafka-error-channels>>.
Default: null
+
NOTE: The Kafka binder uses the `partitionCount` setting of the producer as a hint to create a topic with the given partition count (in conjunction with the `minPartitionCount`, the maximum of the two being the value being used).
Exercise caution when configuring both `minPartitionCount` for a binder and `partitionCount` for an application, as the larger value is used.
@@ -496,6 +517,50 @@ public class Application {
}
----
[[kafka-transactional-binder]]
=== Transactional Binder
Enable transactions by setting `spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix` to a non-empty value, e.g. `tx-`.
When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction.
When the listener exits normally, the listener container will send the offset to the transaction and commit it.
A common producer factory is used for all producer bindings configured using `spring.cloud.stream.kafka.binder.transaction.producer.*` properties; individual binding Kafka producer properties are ignored.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a `@Scheduled` method), you must get a reference to the transactional producer factory and define a `KafkaTransactionManager` bean using it.
====
[source, java]
----
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
return new KafkaTransactionManager<>(pf);
}
----
====
Notice that we get a reference to the binder using the `BinderFactory`; use `null` in the first argument when there is only one binder configured.
If more than one binder is configured, use the binder name to get the reference.
Once we have a reference to the binder, we can obtain a reference to the `ProducerFactory` and create a transaction manager.
Then use normal Spring transaction support, such as `TransactionTemplate` or `@Transactional`, as shown in the following example:
====
[source, java]
----
public static class Sender {
@Transactional
public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
}
}
----
====
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a `ChainedTransactionManager`.
[[kafka-error-channels]]
=== Error Channels

View File

@@ -34,8 +34,6 @@ Sabby Anandan, Marius Bogoevici, Eric Bottard, Mark Fisher, Ilayaperumal Gopinat
*{spring-cloud-stream-version}*
[#index-link]
{docs-url}spring-cloud-stream/{docs-version}home.html
= Reference Guide
include::overview.adoc[]

pom.xml
View File

@@ -2,20 +2,20 @@
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
<packaging>pom</packaging>
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-build</artifactId>
<version>2.1.4.RELEASE</version>
<version>2.2.0.M4</version>
<relativePath />
</parent>
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>2.2.2.RELEASE</spring-kafka.version>
<spring-integration-kafka.version>3.1.0.RELEASE</spring-integration-kafka.version>
<kafka.version>2.0.0</kafka.version>
<spring-cloud-stream.version>2.2.0.M1</spring-cloud-stream.version>
<spring-kafka.version>2.3.0.M4</spring-kafka.version>
<spring-integration-kafka.version>3.2.0.M4</spring-integration-kafka.version>
<kafka.version>2.3.0</kafka.version>
<spring-cloud-stream.version>3.0.0.M3</spring-cloud-stream.version>
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>

View File

@@ -4,7 +4,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
</parent>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<description>Spring Cloud Starter Stream Kafka</description>

View File

@@ -5,7 +5,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
</parent>
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
<description>Spring Cloud Stream Kafka Binder Core</description>

View File

@@ -25,6 +25,8 @@ import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
@@ -56,6 +58,8 @@ public class KafkaBinderConfigurationProperties {
private static final String DEFAULT_KAFKA_CONNECTION_STRING = "localhost:9092";
private final Log logger = LogFactory.getLog(getClass());
private final Transaction transaction = new Transaction();
private final KafkaProperties kafkaProperties;
@@ -529,6 +533,7 @@ public class KafkaBinderConfigurationProperties {
}
}
consumerConfiguration.putAll(this.consumerProperties);
filterStreamManagedConfiguration(consumerConfiguration);
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
return getConfigurationWithBootstrapServer(consumerConfiguration,
@@ -559,6 +564,25 @@ public class KafkaBinderConfigurationProperties {
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG);
}
private void filterStreamManagedConfiguration(Map<String, Object> configuration) {
if (configuration.containsKey(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG)
&& configuration.get(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG).equals(true)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG) +
ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG + "=true is not supported by the Kafka binder");
configuration.remove(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
if (configuration.containsKey(ConsumerConfig.GROUP_ID_CONFIG)) {
logger.warn(constructIgnoredConfigMessage(ConsumerConfig.GROUP_ID_CONFIG) +
"Use spring.cloud.stream.default.group or spring.cloud.stream.binding.<name>.group to specify " +
"the group instead of " + ConsumerConfig.GROUP_ID_CONFIG);
configuration.remove(ConsumerConfig.GROUP_ID_CONFIG);
}
}
private String constructIgnoredConfigMessage(String config) {
return String.format("Ignoring provided value(s) for '%s'. ", config);
}
private Map<String, Object> getConfigurationWithBootstrapServer(
Map<String, Object> configuration, String bootstrapServersConfig) {
if (ObjectUtils.isEmpty(configuration.get(bootstrapServersConfig))) {

View File

@@ -50,6 +50,10 @@ public class KafkaProducerProperties {
private KafkaTopicProperties topic = new KafkaTopicProperties();
private boolean useTopicHeader;
private String recordMetadataChannel;
public int getBufferSize() {
return this.bufferSize;
}
@@ -138,6 +142,22 @@ public class KafkaProducerProperties {
this.topic = topic;
}
public boolean isUseTopicHeader() {
return this.useTopicHeader;
}
public void setUseTopicHeader(boolean useTopicHeader) {
this.useTopicHeader = useTopicHeader;
}
public String getRecordMetadataChannel() {
return this.recordMetadataChannel;
}
public void setRecordMetadataChannel(String recordMetadataChannel) {
this.recordMetadataChannel = recordMetadataChannel;
}
/**
* Enumeration for compression types.
*/

View File

@@ -466,11 +466,11 @@ public class KafkaTopicProvisioner implements
// In some cases, the above partition query may not throw an UnknownTopic..Exception for various reasons.
// For that, we are forcing another query to ensure that the topic is present on the server.
if (CollectionUtils.isEmpty(partitions)) {
final AdminClient adminClient = AdminClient
.create(this.adminClientProperties);
final DescribeTopicsResult describeTopicsResult = adminClient
try (AdminClient adminClient = AdminClient
.create(this.adminClientProperties)) {
final DescribeTopicsResult describeTopicsResult = adminClient
.describeTopics(Collections.singletonList(topicName));
try {
describeTopicsResult.all().get();
}
catch (ExecutionException ex) {

View File

@@ -0,0 +1,108 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.properties;
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.junit.Test;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaBinderConfigurationPropertiesTest {
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setGroupId("group1");
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
kafkaProperties.getConsumer().setEnableAutoCommit(true);
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
Map<String, Object> mergedConsumerConfiguration =
kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConfiguration() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConfiguration(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersGroupIdFromKafkaBinderConfigurationPropertiesConsumerProperties() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.GROUP_ID_CONFIG, "group1"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.GROUP_ID_CONFIG);
}
@Test
public void mergedConsumerConfigurationFiltersEnableAutoCommitFromKafkaBinderConfigurationPropertiesConsumerProps() {
KafkaProperties kafkaProperties = new KafkaProperties();
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties =
new KafkaBinderConfigurationProperties(kafkaProperties);
kafkaBinderConfigurationProperties
.setConsumerProperties(Collections.singletonMap(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"));
Map<String, Object> mergedConsumerConfiguration = kafkaBinderConfigurationProperties.mergedConsumerConfiguration();
assertThat(mergedConsumerConfiguration).doesNotContainKeys(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG);
}
}

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
</parent>
<properties>
@@ -74,13 +74,13 @@
<!-- Following dependencies are needed to support Kafka 1.1.0 client-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>

View File

@@ -0,0 +1,288 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 3.0.0
*/
public abstract class AbstractKafkaStreamsBinderProcessor implements ApplicationContextAware {
private static final Log LOG = LogFactory.getLog(AbstractKafkaStreamsBinderProcessor.class);
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final BindingServiceProperties bindingServiceProperties;
private final KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties;
private final CleanupConfig cleanupConfig;
private final KeyValueSerdeResolver keyValueSerdeResolver;
protected ConfigurableApplicationContext applicationContext;
public AbstractKafkaStreamsBinderProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver, CleanupConfig cleanupConfig) {
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.cleanupConfig = cleanupConfig;
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder,
String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
protected <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
protected GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
protected KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
protected Topology.AutoOffsetReset getAutoOffsetReset(String inboundName, KafkaStreamsConsumerProperties extendedConsumerProperties) {
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
AbstractKafkaStreamsBinderProcessor.LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
return autoOffsetReset;
}
@SuppressWarnings("unchecked")
protected void handleKTableGlobalKTableInputs(Object[] arguments, int index, String input, Class<?> parameterType, Object targetBean,
StreamsBuilderFactoryBean streamsBuilderFactoryBean, StreamsBuilder streamsBuilder,
KafkaStreamsConsumerProperties extendedConsumerProperties,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (GlobalKTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[index] = table;
}
}
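// Editorial sketch (bean name and types are hypothetical; @Bean and functional imports assumed):
// the kind of functional binding the method above serves, where an input binding is materialized
// as a KTable and handed to the function instead of a KStream.
@Bean
public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> joinWithTable() {
	// join the event stream against the table input; the binder wraps both proxies before this runs
	return (stream, table) -> stream.join(table, (count, label) -> count);
}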
@SuppressWarnings({"unchecked"})
protected StreamsBuilderFactoryBean buildStreamsBuilderAndRetrieveConfig(String beanNamePostPrefix,
ApplicationContext applicationContext, String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String bindingLevelApplicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
if (StringUtils.hasText(bindingLevelApplicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
bindingLevelApplicationId);
}
//If the application id is not set by any mechanism, then generate it.
streamConfigGlobalProperties.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
k -> {
String generatedApplicationID = beanNamePostPrefix + "-" + UUID.randomUUID().toString() + "-applicationId";
LOG.info("Generated Kafka Streams Application ID: " + generatedApplicationID);
return generatedApplicationID;
});
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
.getBean("kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(
streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(),
() -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + beanNamePostPrefix, streamsBuilderBeanDefinition);
return applicationContext.getBean(
"&stream-builder-" + beanNamePostPrefix, StreamsBuilderFactoryBean.class);
}
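// Editorial sketch of the application.id resolution implemented above; the binding-level property
// path is this editor's assumption, and a HashMap import is assumed:
//   1. spring.cloud.stream.kafka.streams.bindings.<binding>.consumer.applicationId
//   2. an application.id already present in the global stream configuration (binder level)
//   3. otherwise a generated "<beanNamePostPrefix>-<random UUID>-applicationId"
// Note: the "&" prefix in the getBean call above retrieves the StreamsBuilderFactoryBean itself
// rather than the StreamsBuilder it creates.
private void applicationIdGenerationSketch() {
	Map<String, Object> example = new HashMap<>();
	example.computeIfAbsent(StreamsConfig.APPLICATION_ID_CONFIG,
			k -> "process" + "-" + UUID.randomUUID() + "-applicationId");
	// example now holds something like "process-6f1c...-applicationId"; an id supplied through
	// 1. or 2. would have been left untouched.
}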
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
}
protected KStream<?, ?> getkStream(BindingProperties bindingProperties, KStream<?, ?> stream, boolean nativeDecoding) {
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
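// Editorial sketch (payload and content type are hypothetical) of what the mapValues above
// produces when native decoding is disabled: each value is wrapped in a Spring Message carrying
// the binding's contentType header so the conversion delegate can convert it downstream.
private Object contentTypeWrappingSketch() {
	return MessageBuilder.withPayload("{\"id\": 1}")
			.setHeader(MessageHeaders.CONTENT_TYPE, "application/json").build();
}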
protected Serde<?> getValueSerde(String inboundName, KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, ResolvableType resolvableType) {
if (bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding()) {
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
return this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), kafkaStreamsConsumerProperties, resolvableType);
}
else {
return Serdes.ByteArray();
}
}
}


@@ -118,4 +118,9 @@ public class GlobalKTableBinder extends
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}


@@ -22,10 +22,10 @@ import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.annotation.BindingProvider;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -36,30 +36,18 @@ import org.springframework.context.annotation.Configuration;
* @since 2.1.0
*/
@Configuration
@BindingProvider
public class GlobalKTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
return KafkaStreamsBinderUtils.outerContextBeanFactoryPostProcessor();
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@@ -68,9 +56,13 @@ public class GlobalKTableBinderConfiguration {
public GlobalKTableBinder GlobalKTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
return new GlobalKTableBinder(binderConfigurationProperties,
GlobalKTableBinder globalKTableBinder = new GlobalKTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
globalKTableBinder.setKafkaStreamsExtendedBindingProperties(
kafkaStreamsExtendedBindingProperties);
return globalKTableBinder;
}
}


@@ -23,6 +23,7 @@ import org.apache.kafka.streams.kstream.GlobalKTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
@@ -47,8 +48,12 @@ public class GlobalKTableBoundElementFactory
@Override
public GlobalKTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);


@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -21,9 +21,11 @@ import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.springframework.aop.framework.Advised;
import org.springframework.cloud.stream.binder.AbstractBinder;
import org.springframework.cloud.stream.binder.BinderSpecificPropertiesProvider;
import org.springframework.cloud.stream.binder.Binding;
@@ -92,11 +94,13 @@ class KStreamBinder extends
@Override
protected Binding<KStream<Object, Object>> doBindConsumer(String name, String group,
KStream<Object, Object> inputTarget,
// @checkstyle:off
ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties) {
// @checkstyle:on
this.kafkaStreamsBindingInformationCatalogue
.registerConsumerProperties(inputTarget, properties.getExtension());
KStream<Object, Object> delegate = ((KStreamBoundElementFactory.KStreamWrapperHandler)
((Advised) inputTarget).getAdvisors()[0].getAdvice()).getDelegate();
this.kafkaStreamsBindingInformationCatalogue.registerConsumerProperties(delegate, properties.getExtension());
if (!StringUtils.hasText(group)) {
group = this.binderConfigurationProperties.getApplicationId();
}
@@ -112,17 +116,25 @@ class KStreamBinder extends
@SuppressWarnings("unchecked")
protected Binding<KStream<Object, Object>> doBindProducer(String name,
KStream<Object, Object> outboundBindTarget,
// @checkstyle:off
ExtendedProducerProperties<KafkaStreamsProducerProperties> properties) {
// @checkstyle:on
ExtendedProducerProperties<KafkaProducerProperties> extendedProducerProperties = new ExtendedProducerProperties<>(
new KafkaProducerProperties());
this.kafkaTopicProvisioner.provisionProducerDestination(name,
extendedProducerProperties);
Serde<?> keySerde = this.keyValueSerdeResolver
.getOuboundKeySerde(properties.getExtension());
Serde<?> valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension());
.getOuboundKeySerde(properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable());
LOG.info("Key Serde used for (outbound) " + name + ": " + keySerde.getClass().getName());
Serde<?> valueSerde;
if (properties.isUseNativeEncoding()) {
valueSerde = this.keyValueSerdeResolver.getOutboundValueSerde(properties,
properties.getExtension(), kafkaStreamsBindingInformationCatalogue.getOutboundKStreamResolvable());
}
else {
valueSerde = Serdes.ByteArray();
}
LOG.info("Value Serde used for (outbound) " + name + ": " + valueSerde.getClass().getName());
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
(Serde<Object>) keySerde, (Serde<Object>) valueSerde);
return new DefaultBinding<>(name, null, outboundBindTarget, null);


@@ -23,6 +23,7 @@ import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.annotation.BindingProvider;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -41,6 +42,7 @@ import org.springframework.context.annotation.Import;
@Configuration
@Import({ KafkaAutoConfiguration.class,
KafkaStreamsBinderHealthIndicatorConfiguration.class })
@BindingProvider
public class KStreamBinderConfiguration {
@Bean


@@ -22,6 +22,7 @@ import org.apache.kafka.streams.kstream.KStream;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
@@ -52,8 +53,12 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
public KStream createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);
return createProxyForKStream(name);
@@ -62,6 +67,13 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
@Override
@SuppressWarnings("unchecked")
public KStream createOutput(final String name) {
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ProducerProperties producerProperties = bindingProperties.getProducer();
if (producerProperties == null) {
producerProperties = this.bindingServiceProperties.getProducerProperties(name);
producerProperties.setUseNativeEncoding(true);
}
return createProxyForKStream(name);
}
@@ -90,7 +102,7 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
private static class KStreamWrapperHandler
static class KStreamWrapperHandler
implements KStreamWrapper, MethodInterceptor {
private KStream<Object, Object> delegate;
@@ -121,6 +133,9 @@ class KStreamBoundElementFactory extends AbstractBindingTargetFactory<KStream> {
}
}
KStream<Object, Object> getDelegate() {
return delegate;
}
}
}


@@ -44,8 +44,7 @@ class KStreamStreamListenerParameterAdapter
@Override
public boolean supports(Class bindingTargetType, MethodParameter methodParameter) {
return KStream.class.isAssignableFrom(bindingTargetType)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
return KafkaStreamsBinderUtils.supportsKStream(methodParameter, bindingTargetType);
}
@Override


@@ -122,4 +122,8 @@ class KTableBinder extends
.getExtendedPropertiesEntryClass();
}
public void setKafkaStreamsExtendedBindingProperties(
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties) {
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
}
}


@@ -22,10 +22,10 @@ import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.annotation.BindingProvider;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -36,30 +36,18 @@ import org.springframework.context.annotation.Configuration;
*/
@SuppressWarnings("ALL")
@Configuration
@BindingProvider
public class KTableBinderConfiguration {
@Bean
@ConditionalOnBean(name = "outerContext")
public static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered as first
// and as independent from the parent context.
ApplicationContext outerContext = (ApplicationContext) beanFactory
.getBean("outerContext");
beanFactory.registerSingleton(
KafkaStreamsBinderConfigurationProperties.class.getSimpleName(),
outerContext
.getBean(KafkaStreamsBinderConfigurationProperties.class));
beanFactory.registerSingleton(
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
return KafkaStreamsBinderUtils.outerContextBeanFactoryPostProcessor();
}
@Bean
public KafkaTopicProvisioner provisioningProvider(
KafkaBinderConfigurationProperties binderConfigurationProperties,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaProperties kafkaProperties) {
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
}
@@ -68,10 +56,12 @@ public class KTableBinderConfiguration {
public KTableBinder kTableBinder(
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
KafkaTopicProvisioner kafkaTopicProvisioner,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@Qualifier("kafkaStreamsDlqDispatchers") Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers) {
KTableBinder kStreamBinder = new KTableBinder(binderConfigurationProperties,
KTableBinder kTableBinder = new KTableBinder(binderConfigurationProperties,
kafkaTopicProvisioner, kafkaStreamsDlqDispatchers);
return kStreamBinder;
kTableBinder.setKafkaStreamsExtendedBindingProperties(kafkaStreamsExtendedBindingProperties);
return kTableBinder;
}
}


@@ -23,6 +23,7 @@ import org.apache.kafka.streams.kstream.KTable;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binding.AbstractBindingTargetFactory;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.util.Assert;
@@ -45,8 +46,12 @@ class KTableBoundElementFactory extends AbstractBindingTargetFactory<KTable> {
@Override
public KTable createInput(String name) {
ConsumerProperties consumerProperties = this.bindingServiceProperties
.getConsumerProperties(name);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(name);
ConsumerProperties consumerProperties = bindingProperties.getConsumer();
if (consumerProperties == null) {
consumerProperties = this.bindingServiceProperties.getConsumerProperties(name);
consumerProperties.setUseNativeDecoding(true);
}
// Always set multiplex to true in the kafka streams binder
consumerProperties.setMultiplex(true);


@@ -27,10 +27,12 @@ import org.springframework.context.annotation.Configuration;
/**
* Application support configuration for Kafka Streams binder.
*
* @deprecated Features provided on this class can be directly configured in the application itself using Kafka Streams.
* @author Soby Chacko
*/
@Configuration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@Deprecated
public class KafkaStreamsApplicationSupportAutoConfiguration {
@Bean


@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
@@ -31,28 +32,30 @@ import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.stream.binder.BinderConfiguration;
import org.springframework.cloud.stream.binder.kafka.streams.function.FunctionDetectorCondition;
import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.serde.CompositeNonNativeSerde;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.BindingService;
import org.springframework.cloud.stream.binding.StreamListenerResultAdapter;
import org.springframework.cloud.stream.config.BinderProperties;
import org.springframework.cloud.stream.config.BindingServiceConfiguration;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.Environment;
import org.springframework.core.env.MapPropertySource;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;
@@ -149,14 +152,30 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean("streamConfigGlobalProperties")
public Map<String, Object> streamConfigGlobalProperties(
KafkaStreamsBinderConfigurationProperties configProperties,
KafkaStreamsConfiguration kafkaStreamsConfiguration) {
KafkaStreamsConfiguration kafkaStreamsConfiguration, ConfigurableEnvironment environment) {
Properties properties = kafkaStreamsConfiguration.asProperties();
// Override Spring Boot bootstrap server setting if left to default with the value
// configured in the binder
String kafkaConnectionString = configProperties.getKafkaConnectionString();
if (kafkaConnectionString != null && kafkaConnectionString.equals("localhost:9092")) {
//Making sure that the application indeed set a property.
String kafkaStreamsBinderBroker = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
if (StringUtils.isEmpty(kafkaStreamsBinderBroker)) {
//Kafka Streams binder specific property for brokers is not set by the application.
//See if there is one configured at the kafka binder level.
String kafkaBinderBroker = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
if (!StringUtils.isEmpty(kafkaBinderBroker)) {
kafkaConnectionString = kafkaBinderBroker;
configProperties.setBrokers(kafkaConnectionString);
}
}
}
if (ObjectUtils.isEmpty(properties.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG))) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
else {
Object bootstrapServerConfig = properties
@@ -167,7 +186,14 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
.get(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG);
if (bootStrapServers.equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
configProperties.getKafkaConnectionString());
kafkaConnectionString);
}
}
else if (bootstrapServerConfig instanceof List) {
List bootStrapCollection = (List) bootstrapServerConfig;
if (bootStrapCollection.size() == 1 && bootStrapCollection.get(0).equals("localhost:9092")) {
properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
kafkaConnectionString);
}
}
}
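// Editorial sketch (assumed standalone helper, simplified: the real code above only applies this
// when the Kafka Streams binder connection string is still at its localhost:9092 default) of the
// broker resolution: a Kafka Streams binder broker setting wins; otherwise the plain Kafka binder
// broker setting is honored; otherwise the existing connection string is kept.
static String resolveBrokersSketch(Environment environment, String currentConnectionString) {
	String streamsBrokers = environment.getProperty("spring.cloud.stream.kafka.streams.binder.brokers");
	if (!StringUtils.isEmpty(streamsBrokers)) {
		return currentConnectionString; // already reflects the Kafka Streams binder setting
	}
	String kafkaBrokers = environment.getProperty("spring.cloud.stream.kafka.binder.brokers");
	return StringUtils.isEmpty(kafkaBrokers) ? currentConnectionString : kafkaBrokers;
}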
@@ -240,33 +266,33 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
}
@Bean
// @ConditionalOnProperty("spring.cloud.stream.kafka.streams.function.definition")
@ConditionalOnProperty("spring.cloud.stream.function.definition")
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
KeyValueSerdeResolver keyValueSerdeResolver,
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
ObjectProvider<CleanupConfig> cleanupConfig,
FunctionCatalog functionCatalog, BindableProxyFactory bindableProxyFactory) {
KafkaStreamsBindableProxyFactory bindableProxyFactory,
StreamFunctionProperties streamFunctionProperties) {
return new KafkaStreamsFunctionProcessor(bindingServiceProperties, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, kafkaStreamsBindingInformationCatalogue, kafkaStreamsMessageConversionDelegate,
cleanupConfig.getIfUnique(), functionCatalog, bindableProxyFactory);
cleanupConfig.getIfUnique(), bindableProxyFactory, streamFunctionProperties);
}
@Bean
public KafkaStreamsMessageConversionDelegate messageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties) {
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverterFactory, sendToDlqAndContinue,
return new KafkaStreamsMessageConversionDelegate(compositeMessageConverter, sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue, binderConfigurationProperties);
}
@Bean
public CompositeNonNativeSerde compositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
CompositeMessageConverter compositeMessageConverterFactory) {
return new CompositeNonNativeSerde(compositeMessageConverterFactory);
}
@@ -302,6 +328,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
@Bean
@SuppressWarnings("unchecked")
@ConditionalOnMissingBean
public KeyValueSerdeResolver keyValueSerdeResolver(
@Qualifier("streamConfigGlobalProperties") Object streamConfigGlobalProperties,
KafkaStreamsBinderConfigurationProperties properties) {
@@ -309,12 +336,6 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
(Map<String, Object>) streamConfigGlobalProperties, properties);
}
@Bean
public QueryableStoreRegistry queryableStoreTypeRegistry(
KafkaStreamsRegistry kafkaStreamsRegistry) {
return new QueryableStoreRegistry(kafkaStreamsRegistry);
}
@Bean
public InteractiveQueryService interactiveQueryServices(
KafkaStreamsRegistry kafkaStreamsRegistry,


@@ -18,12 +18,17 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.util.StringUtils;
/**
@@ -83,4 +88,24 @@ final class KafkaStreamsBinderUtils {
}
}
static boolean supportsKStream(MethodParameter methodParameter, Class<?> targetBeanClass) {
return KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
}
static BeanFactoryPostProcessor outerContextBeanFactoryPostProcessor() {
return (beanFactory) -> {
// It is safe to call getBean("outerContext") here, because this bean is
// registered first and is independent from the parent context.
GenericApplicationContext outerContext = (GenericApplicationContext) beanFactory
.getBean("outerContext");
outerContext.registerBean(KafkaStreamsBinderConfigurationProperties.class,
() -> outerContext.getBean(KafkaStreamsBinderConfigurationProperties.class));
outerContext.registerBean(KafkaStreamsBindingInformationCatalogue.class,
() -> outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
};
}
}


@@ -16,17 +16,20 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
@@ -46,6 +49,10 @@ class KafkaStreamsBindingInformationCatalogue {
private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
private ResolvableType outboundKStreamResolvable;
private final Map<KStream<?, ?>, Serde<?>> keySerdeInfo = new HashMap<>();
/**
* For a given bounded {@link KStream}, retrieve it's corresponding destination on the
* broker.
@@ -122,4 +129,36 @@ class KafkaStreamsBindingInformationCatalogue {
return this.streamsBuilderFactoryBeans;
}
void setOutboundKStreamResolvable(ResolvableType outboundResolvable) {
this.outboundKStreamResolvable = outboundResolvable;
}
ResolvableType getOutboundKStreamResolvable() {
return outboundKStreamResolvable;
}
/**
* Adding a mapping for KStream target to its corresponding KeySerde.
* This is used for sending to DLQ when deserialization fails. See {@link KafkaStreamsMessageConversionDelegate}
* for details.
*
* @param kStreamTarget target KStream
* @param keySerde Serde used for the key
*/
void addKeySerde(KStream<?, ?> kStreamTarget, Serde<?> keySerde) {
this.keySerdeInfo.put(kStreamTarget, keySerde);
}
Serde<?> getKeySerde(KStream<?, ?> kStreamTarget) {
return this.keySerdeInfo.get(kStreamTarget);
}
Map<KStream<?, ?>, BindingProperties> getBindingProperties() {
return bindingProperties;
}
Map<KStream<?, ?>, KafkaStreamsConsumerProperties> getConsumerProperties() {
return consumerProperties;
}
}
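// Editorial sketch (same-package helper; variable names and the String key Serde are hypothetical)
// of how the new keySerdeInfo mapping supports DLQ handling when a record with a key fails
// deserialization: the Serde registered at bind time is looked up again so the key can be
// re-serialized for the DLQ topic.
static Serde<?> keySerdeForDlqSketch(KafkaStreamsBindingInformationCatalogue catalogue,
		KStream<?, ?> boundInputStream) {
	// registered when the input binding is created (see KafkaStreamsFunctionProcessor)
	catalogue.addKeySerde(boundInputStream, org.apache.kafka.common.serialization.Serdes.String());
	// looked up later by the conversion delegate when routing the failed record to the DLQ
	return catalogue.getKeySerde(boundInputStream);
}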


@@ -16,63 +16,56 @@
package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.function.context.FunctionCatalog;
import org.springframework.cloud.function.core.FluxedFunction;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsBindableProxyFactory;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.StreamListenerErrorMessages;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.StringUtils;
/**
* @author Soby Chacko
* @since 2.2.0
*/
public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderProcessor implements BeanFactoryAware {
private static final Log LOG = LogFactory.getLog(KafkaStreamsFunctionProcessor.class);
@@ -82,12 +75,14 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
private final KeyValueSerdeResolver keyValueSerdeResolver;
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate;
private final CleanupConfig cleanupConfig;
private final FunctionCatalog functionCatalog;
private final BindableProxyFactory bindableProxyFactory;
private ConfigurableApplicationContext applicationContext;
private Set<String> origInputs = new LinkedHashSet<>();
private Set<String> origOutputs = new LinkedHashSet<>();
private ResolvableType outboundResolvableType;
private KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory;
private BeanFactory beanFactory;
private StreamFunctionProperties streamFunctionProperties;
public KafkaStreamsFunctionProcessor(BindingServiceProperties bindingServiceProperties,
KafkaStreamsExtendedBindingProperties kafkaStreamsExtendedBindingProperties,
@@ -95,67 +90,114 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
CleanupConfig cleanupConfig,
FunctionCatalog functionCatalog,
BindableProxyFactory bindableProxyFactory) {
KafkaStreamsBindableProxyFactory bindableProxyFactory,
StreamFunctionProperties streamFunctionProperties) {
super(bindingServiceProperties, kafkaStreamsBindingInformationCatalogue, kafkaStreamsExtendedBindingProperties,
keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = kafkaStreamsExtendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
this.cleanupConfig = cleanupConfig;
this.functionCatalog = functionCatalog;
this.bindableProxyFactory = bindableProxyFactory;
this.kafkaStreamsBindableProxyFactory = bindableProxyFactory;
this.origInputs.addAll(bindableProxyFactory.getInputs());
this.origOutputs.addAll(bindableProxyFactory.getOutputs());
this.streamFunctionProperties = streamFunctionProperties;
}
private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType) {
final Set<String> inputs = new TreeSet<>(this.bindableProxyFactory.getInputs());
Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
if (resolvableType != null && resolvableType.getRawClass() != null) {
int inputCount = 1;
Map<String, ResolvableType> map = new LinkedHashMap<>();
final Iterator<String> iterator = inputs.iterator();
if (iterator.hasNext()) {
map.put(iterator.next(), resolvableType.getGeneric(0));
ResolvableType generic = resolvableType.getGeneric(1);
while (iterator.hasNext() && generic != null) {
if (generic.getRawClass() != null &&
(generic.getRawClass().equals(Function.class) ||
generic.getRawClass().equals(Consumer.class))) {
map.put(iterator.next(), generic.getGeneric(0));
}
generic = generic.getGeneric(1);
}
}
return map;
}
@SuppressWarnings("unchecked")
public void orchestrateStreamListenerSetupMethod(ResolvableType resolvableType, String functionName) {
final Set<String> outputs = new TreeSet<>(this.bindableProxyFactory.getOutputs());
String[] methodAnnotatedOutboundNames = new String[outputs.size()];
int j = 0;
for (String output : outputs) {
methodAnnotatedOutboundNames[j++] = output;
}
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, "foobar");
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
Consumer<Object> consumer = functionCatalog.lookup(Consumer.class, functionName);
consumer.accept(adaptedInboundArguments[0]);
ResolvableType currentOutputGeneric;
if (resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class)) {
inputCount = 2;
currentOutputGeneric = resolvableType.getGeneric(2);
}
else {
currentOutputGeneric = resolvableType.getGeneric(1);
}
while (currentOutputGeneric != null && currentOutputGeneric.getRawClass() != null
&& (functionOrConsumerFound(currentOutputGeneric))) {
inputCount++;
currentOutputGeneric = currentOutputGeneric.getGeneric(1);
}
Function<Object, Object> function = functionCatalog.lookup(Function.class, functionName);
Object target = null;
if (function instanceof FluxedFunction) {
target = ((FluxedFunction) function).getTarget();
final Set<String> inputs = new LinkedHashSet<>(origInputs);
final Iterator<String> iterator = inputs.iterator();
popuateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
ResolvableType iterableResType = resolvableType;
int i = resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class) ? 2 : 1;
if (i == inputCount) {
outboundResolvableType = iterableResType.getGeneric(i);
}
else {
while (i < inputCount && iterator.hasNext()) {
iterableResType = iterableResType.getGeneric(1);
if (iterableResType.getRawClass() != null &&
functionOrConsumerFound(iterableResType)) {
popuateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
}
i++;
}
outboundResolvableType = iterableResType.getGeneric(1);
}
}
return resolvableTypeMap;
}
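// Editorial sketch of the generics walking performed by buildTypeMap above for a curried function
// signature; the actual binding names come from the bindable proxy factory and are omitted here.
private static void buildTypeMapSketch() {
	ResolvableType curried = ResolvableType.forClassWithGenerics(Function.class,
			ResolvableType.forClassWithGenerics(KStream.class, String.class, String.class),
			ResolvableType.forClassWithGenerics(Function.class,
					ResolvableType.forClassWithGenerics(KTable.class, String.class, String.class),
					ResolvableType.forClassWithGenerics(KStream.class, String.class, String.class)));
	// curried.getGeneric(0)               -> KStream<String, String> (first input binding)
	// curried.getGeneric(1).getGeneric(0) -> KTable<String, String>  (second input binding)
	// curried.getGeneric(1).getGeneric(1) -> KStream<String, String> (outbound type)
	// A BiFunction/BiConsumer contributes its first two generics as the first two input bindings.
}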
private boolean functionOrConsumerFound(ResolvableType iterableResType) {
return iterableResType.getRawClass().equals(Function.class) ||
iterableResType.getRawClass().equals(Consumer.class);
}
private void popuateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap, Iterator<String> iterator) {
final String next = iterator.next();
resolvableTypeMap.put(next, resolvableType.getGeneric(0));
if (resolvableType.getRawClass() != null &&
(resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
resolvableType.getRawClass().isAssignableFrom(BiConsumer.class))
&& iterator.hasNext()) {
resolvableTypeMap.put(iterator.next(), resolvableType.getGeneric(1));
}
origInputs.remove(next);
}
@SuppressWarnings({ "unchecked", "rawtypes" })
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName) {
final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType);
Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
try {
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
Consumer<Object> consumer = (Consumer) this.beanFactory.getBean(functionName);
Assert.isTrue(consumer != null,
"No corresponding consumer beans found in the catalog");
consumer.accept(adaptedInboundArguments[0]);
}
else if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiConsumer.class)) {
BiConsumer<Object, Object> biConsumer = (BiConsumer) this.beanFactory.getBean(functionName);
Assert.isTrue(biConsumer != null,
"No corresponding biConsumer beans found");
biConsumer.accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Object result;
if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiFunction.class)) {
BiFunction<Object, Object, Object> biFunction = (BiFunction) beanFactory.getBean(functionName);
Assert.isTrue(biFunction != null, "BiFunction bean cannot be null");
result = biFunction.apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
}
else {
Function<Object, Object> function = (Function) beanFactory.getBean(functionName);
Assert.isTrue(function != null, "Function bean cannot be null");
result = function.apply(adaptedInboundArguments[0]);
}
function = (Function) target;
Object result = function.apply(adaptedInboundArguments[0]);
int i = 1;
while (result instanceof Function || result instanceof Consumer) {
if (result instanceof Function) {
@@ -168,40 +210,73 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
i++;
}
if (result != null) {
kafkaStreamsBindingInformationCatalogue.setOutboundKStreamResolvable(
outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
final Set<String> outputs = new TreeSet<>(origOutputs);
final Iterator<String> outboundDefinitionIterator = outputs.iterator();
if (result.getClass().isArray()) {
Assert.isTrue(methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
if (result.getClass().isArray()) {
// Binding the targets here because the output bindings were deferred in the KafkaStreamsBindableProxyFactory,
// which did not yet know the size of the returned array. At this point in the execution,
// we know exactly the number of outbound components (from the array length), so do the binding.
final int length = ((Object[]) result).length;
List<String> outputBindings = getOutputBindings(functionName, length);
Iterator<String> iterator = outputBindings.iterator();
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
Object[] outboundKStreams = (Object[]) result;
int k = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[k++]);
for (int ij = 0; ij < length; ij++) {
String next = iterator.next();
this.kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
registry.registerBeanDefinition(next, rootBeanDefinition1);
Object targetBean = this.applicationContext.getBean(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) outboundKStream);
boundElement.wrap((KStream) outboundKStreams[ij]);
}
}
else {
Object targetBean = this.applicationContext.getBean(methodAnnotatedOutboundNames[0]);
if (outboundDefinitionIterator.hasNext()) {
final String next = outboundDefinitionIterator.next();
Object targetBean = this.applicationContext.getBean(next);
this.origOutputs.remove(next);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) result);
KStreamBoundElementFactory.KStreamWrapper
boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
boundElement.wrap((KStream) result);
}
}
}
}
}
catch (Exception ex) {
throw new BeanInitializationException("Cannot setup StreamListener for foobar", ex);
throw new BeanInitializationException("Cannot setup function invoker for this Kafka Streams function.", ex);
}
}
private List<String> getOutputBindings(String functionName, int outputs) {
List<String> outputBindings = this.streamFunctionProperties.getOutputBindings().get(functionName);
List<String> outputBindingNames = new ArrayList<>();
if (!CollectionUtils.isEmpty(outputBindings)) {
outputBindingNames.addAll(outputBindings);
return outputBindingNames;
}
else {
for (int i = 0; i < outputs; i++) {
outputBindingNames.add(String.format("%s_%s_%d", functionName, KafkaStreamsBindableProxyFactory.DEFAULT_OUTPUT_SUFFIX, i));
}
}
return outputBindingNames;
}
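// Editorial sketch (bean name and predicates are hypothetical; @Bean import assumed): a function
// returning a KStream[] whose elements get bound to the names resolved by getOutputBindings above,
// either taken from StreamFunctionProperties#getOutputBindings or generated per-index defaults.
@Bean
public Function<KStream<String, Long>, KStream<String, Long>[]> branching() {
	// two branches -> two output bindings are registered at runtime, one per array element
	return input -> input.branch(
			(key, value) -> value % 2 == 0,
			(key, value) -> value % 2 != 0);
}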
@SuppressWarnings({"unchecked"})
private Object[] adaptAndRetrieveInboundArguments(Map<String, ResolvableType> stringResolvableTypeMap,
String functionName) {
@@ -211,14 +286,13 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
Class<?> parameterType = stringResolvableTypeMap.get(input).getRawClass();
if (input != null) {
Assert.isInstanceOf(String.class, input, "Annotation value must be a String");
Object targetBean = applicationContext.getBean(input);
BindingProperties bindingProperties = this.bindingServiceProperties.getBindingProperties(input);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
//Retrieve the StreamsConfig created for this method if available.
//Otherwise, create the StreamsBuilderFactory and get the underlying config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(functionName)) {
buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext, input);
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(functionName, applicationContext, input);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean =
@@ -227,29 +301,13 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
KafkaStreamsConsumerProperties extendedConsumerProperties =
this.kafkaStreamsExtendedBindingProperties.getExtendedConsumerProperties(input);
//get state store spec
//KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding " + input + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
Serde<?> keySerde = this.keyValueSerdeResolver.getInboundKeySerde(extendedConsumerProperties, stringResolvableTypeMap.get(input));
LOG.info("Key Serde used for " + input + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(input).isUseNativeDecoding() ?
getValueSerde(input, extendedConsumerProperties, stringResolvableTypeMap.get(input)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + input + ": " + valueSerde.getClass().getName());
final Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(input, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(input, bindingProperties,
@@ -258,6 +316,8 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
(KStreamBoundElementFactory.KStreamWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream) kStreamWrapper, keySerde);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
@@ -277,31 +337,11 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
if (arguments[i] == null) {
arguments[i] = stream;
}
Assert.notNull(arguments[i], "problems..");
Assert.notNull(arguments[i], "Problems encountered while adapting the function argument.");
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper =
(KTableBoundElementFactory.KTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[i] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties.getBindingDestination(input);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder, keySerde, valueSerde, materializedAs,
bindingDestination, autoOffsetReset);
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper =
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
//wrap the proxy created during the initial target type binding with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[i] = table;
else {
handleKTableGlobalKTableInputs(arguments, i, input, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset);
}
i++;
}
@@ -316,54 +356,25 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAsGlobalKTable(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset) :
streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde, Serde<?> valueSerde,
String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null ?
materializedAs(streamsBuilder, bindingDestination, materializedAs, keySerde, valueSerde,
autoOffsetReset) :
streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde).withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder, String destination,
String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(StreamsBuilder streamsBuilder,
String destination, String storeName,
Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(String storeName,
Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k)
.withValueSerde(v);
}
private KStream<?, ?> getkStream(String inboundName,
BindingProperties bindingProperties,
StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, Topology.AutoOffsetReset autoOffsetReset) {
try {
final Map<String, StoreBuilder> storeBuilders = applicationContext.getBeansOfType(StoreBuilder.class);
if (!CollectionUtils.isEmpty(storeBuilders)) {
storeBuilders.values().forEach(storeBuilder -> {
streamsBuilder.addStateStore(storeBuilder);
if (LOG.isInfoEnabled()) {
LOG.info("state store " + storeBuilder.name() + " added to topology");
}
});
}
}
catch (Exception e) {
// Pass through.
}
String[] bindingTargets = StringUtils
.commaDelimitedListToStringArray(this.bindingServiceProperties.getBindingDestination(inboundName));
@@ -382,85 +393,11 @@ public class KafkaStreamsFunctionProcessor implements ApplicationContextAware {
"Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType, BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class) || parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
//No framework-level message conversion provided for KTable/GlobalKTable; it's done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(String functionName, ApplicationContext applicationContext,
String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext.getBean("streamConfigGlobalProperties",
Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
//override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName).getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext.getBean(
"kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration =
new KafkaStreamsConfiguration(streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS, kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration, this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition =
BeanDefinitionBuilder.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(), () -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition("stream-builder-" +
functionName, streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean("&stream-builder-" +
functionName, StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(functionName, streamsBuilderX);
return getkStream(bindingProperties, stream, nativeDecoding);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
}
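For orientation, the kind of bean the function processor above wires up looks roughly like this (a minimal sketch; the bean name and generic types are illustrative, and the binder derives the binding names and Serdes as shown elsewhere in this change set):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessorConfig {

    // A functional-style Kafka Streams processor: the binder binds the inbound and
    // outbound KStream around this bean (e.g. "process_in" / "process_out" under the
    // default naming used by the bindable proxy factory shown later in this diff).
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(value -> value.toUpperCase());
    }
}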


@@ -19,19 +19,23 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
@@ -53,7 +57,7 @@ public class KafkaStreamsMessageConversionDelegate {
private static final ThreadLocal<KeyValue<Object, Object>> keyValueThreadLocal = new ThreadLocal<>();
private final CompositeMessageConverterFactory compositeMessageConverterFactory;
private final CompositeMessageConverter compositeMessageConverter;
private final SendToDlqAndContinue sendToDlqAndContinue;
@@ -62,11 +66,11 @@ public class KafkaStreamsMessageConversionDelegate {
private final KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties;
KafkaStreamsMessageConversionDelegate(
CompositeMessageConverterFactory compositeMessageConverterFactory,
CompositeMessageConverter compositeMessageConverter,
SendToDlqAndContinue sendToDlqAndContinue,
KafkaStreamsBindingInformationCatalogue kstreamBindingInformationCatalogue,
KafkaStreamsBinderConfigurationProperties kstreamBinderConfigurationProperties) {
this.compositeMessageConverterFactory = compositeMessageConverterFactory;
this.compositeMessageConverter = compositeMessageConverter;
this.sendToDlqAndContinue = sendToDlqAndContinue;
this.kstreamBindingInformationCatalogue = kstreamBindingInformationCatalogue;
this.kstreamBinderConfigurationProperties = kstreamBinderConfigurationProperties;
@@ -77,14 +81,14 @@ public class KafkaStreamsMessageConversionDelegate {
* @param outboundBindTarget outbound KStream target
* @return serialized KStream
*/
@SuppressWarnings("rawtypes")
@SuppressWarnings({"rawtypes", "unchecked"})
public KStream serializeOnOutbound(KStream<?, ?> outboundBindTarget) {
String contentType = this.kstreamBindingInformationCatalogue
.getContentType(outboundBindTarget);
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
return outboundBindTarget.mapValues((v) -> {
final KStream<?, ?> kStreamWithEnrichedHeaders = outboundBindTarget.mapValues((v) -> {
Message<?> message = v instanceof Message<?> ? (Message<?>) v
: MessageBuilder.withPayload(v).build();
Map<String, Object> headers = new HashMap<>(message.getHeaders());
@@ -92,9 +96,46 @@ public class KafkaStreamsMessageConversionDelegate {
headers.put(MessageHeaders.CONTENT_TYPE, contentType);
}
MessageHeaders messageHeaders = new MessageHeaders(headers);
return messageConverter.toMessage(message.getPayload(), messageHeaders)
.getPayload();
final Message<?> convertedMessage = messageConverter.toMessage(message.getPayload(), messageHeaders);
perRecordContentTypeHolder.setContentType((String) messageHeaders.get(MessageHeaders.CONTENT_TYPE));
return convertedMessage.getPayload();
});
kStreamWithEnrichedHeaders.process(() -> new Processor() {
ProcessorContext context;
@Override
public void init(ProcessorContext context) {
this.context = context;
}
@Override
public void process(Object key, Object value) {
if (perRecordContentTypeHolder.contentType != null) {
this.context.headers().remove(MessageHeaders.CONTENT_TYPE);
final Header header;
try {
header = new RecordHeader(MessageHeaders.CONTENT_TYPE,
new ObjectMapper().writeValueAsBytes(perRecordContentTypeHolder.contentType));
this.context.headers().add(header);
}
catch (Exception e) {
if (LOG.isDebugEnabled()) {
LOG.debug("Could not add content type header");
}
}
perRecordContentTypeHolder.unsetContentType();
}
}
@Override
public void close() {
}
});
return kStreamWithEnrichedHeaders;
}
/**
@@ -106,8 +147,7 @@ public class KafkaStreamsMessageConversionDelegate {
@SuppressWarnings({ "unchecked", "rawtypes" })
public KStream deserializeOnInbound(Class<?> valueClass,
KStream<?, ?> bindingTarget) {
MessageConverter messageConverter = this.compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
MessageConverter messageConverter = this.compositeMessageConverter;
final PerRecordContentTypeHolder perRecordContentTypeHolder = new PerRecordContentTypeHolder();
resolvePerRecordContentType(bindingTarget, perRecordContentTypeHolder);
@@ -242,8 +282,14 @@ public class KafkaStreamsMessageConversionDelegate {
String destination = this.context.topic();
if (o2 instanceof Message) {
Message message = (Message) o2;
// We need to convert the key to a byte[] before sending to DLQ.
Serde keySerde = kstreamBindingInformationCatalogue.getKeySerde(bindingTarget);
Serializer keySerializer = keySerde.serializer();
byte[] keyBytes = keySerializer.serialize(null, o);
KafkaStreamsMessageConversionDelegate.this.sendToDlqAndContinue
.sendToDlq(destination, (byte[]) o,
.sendToDlq(destination, keyBytes,
(byte[]) message.getPayload(),
this.context.partition());
}
@@ -283,6 +329,10 @@ public class KafkaStreamsMessageConversionDelegate {
this.contentType = contentType;
}
void unsetContentType() {
this.contentType = null;
}
}
}
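A minimal sketch of the key-handling fix above (illustrative values only, not the binder's API): on a deserialization failure the record key is run through the binding's key Serde so that the DLQ dispatcher receives byte[] for both key and payload.

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class DlqKeySerializationSketch {

    public static void main(String[] args) {
        // Stand-in for whatever key Serde the binding information catalogue registered.
        Serde<String> keySerde = Serdes.String();
        byte[] keyBytes = keySerde.serializer().serialize(null, "order-42");
        byte[] payloadBytes = "unparseable payload".getBytes(StandardCharsets.UTF_8);

        // In the binder, these byte arrays (plus topic and partition) are what gets
        // handed to SendToDlqAndContinue#sendToDlq.
        System.out.println(keyBytes.length + " key bytes, " + payloadBytes.length + " payload bytes");
    }
}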


@@ -23,12 +23,11 @@ import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
@@ -36,21 +35,12 @@ import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanInitializationException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -62,16 +52,12 @@ import org.springframework.cloud.stream.binding.StreamListenerSetupMethodOrchest
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.core.ResolvableType;
import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.ObjectUtils;
import org.springframework.util.ReflectionUtils;
@@ -92,8 +78,8 @@ import org.springframework.util.StringUtils;
* @author Lei Chen
* @author Gary Russell
*/
class KafkaStreamsStreamListenerSetupMethodOrchestrator
implements StreamListenerSetupMethodOrchestrator, ApplicationContextAware {
class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStreamsBinderProcessor
implements StreamListenerSetupMethodOrchestrator {
private static final Log LOG = LogFactory
.getLog(KafkaStreamsStreamListenerSetupMethodOrchestrator.class);
@@ -110,13 +96,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
private final Map<Method, List<String>> registeredStoresPerMethod = new HashMap<>();
private final CleanupConfig cleanupConfig;
private ConfigurableApplicationContext applicationContext;
private final Map<Method, StreamsBuilderFactoryBean> methodStreamsBuilderFactoryBeanMap = new HashMap<>();
KafkaStreamsStreamListenerSetupMethodOrchestrator(
BindingServiceProperties bindingServiceProperties,
@@ -126,13 +108,13 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
StreamListenerParameterAdapter streamListenerParameterAdapter,
Collection<StreamListenerResultAdapter> listenerResultAdapters,
CleanupConfig cleanupConfig) {
super(bindingServiceProperties, bindingInformationCatalogue, extendedBindingProperties, keyValueSerdeResolver, cleanupConfig);
this.bindingServiceProperties = bindingServiceProperties;
this.kafkaStreamsExtendedBindingProperties = extendedBindingProperties;
this.keyValueSerdeResolver = keyValueSerdeResolver;
this.kafkaStreamsBindingInformationCatalogue = bindingInformationCatalogue;
this.streamListenerParameterAdapter = streamListenerParameterAdapter;
this.streamListenerResultAdapters = listenerResultAdapters;
this.cleanupConfig = cleanupConfig;
}
@Override
@@ -182,41 +164,33 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
else {
Object result = method.invoke(bean, adaptedInboundArguments);
if (result.getClass().isArray()) {
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[i++]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
if (result.getClass().isArray()) {
Assert.isTrue(
methodAnnotatedOutboundNames.length == ((Object[]) result).length,
"Result does not match with the number of declared outbounds");
}
else {
Assert.isTrue(methodAnnotatedOutboundNames.length == 1,
"Result does not match with the number of declared outbounds");
}
}
else {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(result.getClass(),
targetBean.getClass())) {
streamListenerResultAdapter.adapt(result, targetBean);
break;
kafkaStreamsBindingInformationCatalogue.setOutboundKStreamResolvable(ResolvableType.forMethodReturnType(method));
if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
if (result.getClass().isArray()) {
Object[] outboundKStreams = (Object[]) result;
int i = 0;
for (Object outboundKStream : outboundKStreams) {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[i++]);
adaptStreamListenerResult(outboundKStream, targetBean);
}
}
else {
Object targetBean = this.applicationContext
.getBean(methodAnnotatedOutboundNames[0]);
adaptStreamListenerResult(result, targetBean);
}
}
}
}
@@ -226,6 +200,18 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
}
}
@SuppressWarnings("unchecked")
private void adaptStreamListenerResult(Object outboundKStream, Object targetBean) {
for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
if (streamListenerResultAdapter.supports(
outboundKStream.getClass(), targetBean.getClass())) {
streamListenerResultAdapter.adapt(outboundKStream,
targetBean);
break;
}
}
}
@Override
@SuppressWarnings({"unchecked"})
public Object[] adaptAndRetrieveInboundArguments(Method method, String inboundName,
@@ -254,13 +240,14 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
.getBean((String) targetReferenceValue);
BindingProperties bindingProperties = this.bindingServiceProperties
.getBindingProperties(inboundName);
enableNativeDecodingForKTableAlways(parameterType, bindingProperties);
// Retrieve the StreamsConfig created for this method if available.
// Otherwise, create the StreamsBuilderFactory and get the underlying
// config.
if (!this.methodStreamsBuilderFactoryBeanMap.containsKey(method)) {
buildStreamsBuilderAndRetrieveConfig(method, applicationContext,
StreamsBuilderFactoryBean streamsBuilderFactoryBean = buildStreamsBuilderAndRetrieveConfig(method.getDeclaringClass().getSimpleName() + "-" + method.getName(),
applicationContext,
inboundName);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderFactoryBean);
}
try {
StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.methodStreamsBuilderFactoryBeanMap
@@ -270,31 +257,16 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
.getExtendedConsumerProperties(inboundName);
// get state store spec
KafkaStreamsStateStoreProperties spec = buildStateStoreSpec(method);
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties);
Serde<?> valueSerde = this.keyValueSerdeResolver.getInboundValueSerde(
bindingProperties.getConsumer(), extendedConsumerProperties);
final KafkaConsumerProperties.StartOffset startOffset = extendedConsumerProperties
.getStartOffset();
Topology.AutoOffsetReset autoOffsetReset = null;
if (startOffset != null) {
switch (startOffset) {
case earliest:
autoOffsetReset = Topology.AutoOffsetReset.EARLIEST;
break;
case latest:
autoOffsetReset = Topology.AutoOffsetReset.LATEST;
break;
default:
break;
}
}
if (extendedConsumerProperties.isResetOffsets()) {
LOG.warn("Detected resetOffsets configured on binding "
+ inboundName + ". "
+ "Setting resetOffsets in Kafka Streams binder does not have any effect.");
}
Serde<?> keySerde = this.keyValueSerdeResolver
.getInboundKeySerde(extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter));
LOG.info("Key Serde used for " + targetReferenceValue + ": " + keySerde.getClass().getName());
Serde<?> valueSerde = bindingServiceProperties.getConsumerProperties(inboundName).isUseNativeDecoding() ?
getValueSerde(inboundName, extendedConsumerProperties, ResolvableType.forMethodParameter(methodParameter)) : Serdes.ByteArray();
LOG.info("Value Serde used for " + targetReferenceValue + ": " + valueSerde.getClass().getName());
Topology.AutoOffsetReset autoOffsetReset = getAutoOffsetReset(inboundName, extendedConsumerProperties);
if (parameterType.isAssignableFrom(KStream.class)) {
KStream<?, ?> stream = getkStream(inboundName, spec,
@@ -304,13 +276,17 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
// wrap the proxy created during the initial target type binding
// with real object (KStream)
kStreamWrapper.wrap((KStream<Object, Object>) stream);
this.kafkaStreamsBindingInformationCatalogue.addKeySerde(stream, keySerde);
BindingProperties bindingProperties1 = this.kafkaStreamsBindingInformationCatalogue.getBindingProperties().get(kStreamWrapper);
this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
if (streamListenerParameterAdapter.supports(stream.getClass(),
methodParameter)) {
arguments[parameterIndex] = streamListenerParameterAdapter
.adapt(kStreamWrapper, methodParameter);
.adapt(stream, methodParameter);
break;
}
}
@@ -323,39 +299,9 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
+ method + "from " + stream.getClass() + " to "
+ parameterType);
}
else if (parameterType.isAssignableFrom(KTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
KTable<?, ?> table = getKTable(streamsBuilder, keySerde,
valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
KTableBoundElementFactory.KTableWrapper kTableWrapper = (KTableBoundElementFactory.KTableWrapper) targetBean;
// wrap the proxy created during the initial target type binding
// with real object (KTable)
kTableWrapper.wrap((KTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
}
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
String materializedAs = extendedConsumerProperties
.getMaterializedAs();
String bindingDestination = this.bindingServiceProperties
.getBindingDestination(inboundName);
GlobalKTable<?, ?> table = getGlobalKTable(streamsBuilder,
keySerde, valueSerde, materializedAs, bindingDestination,
autoOffsetReset);
// @checkstyle:off
GlobalKTableBoundElementFactory.GlobalKTableWrapper globalKTableWrapper = (GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
// @checkstyle:on
// wrap the proxy created during the initial target type binding
// with real object (KTable)
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
this.kafkaStreamsBindingInformationCatalogue
.addStreamBuilderFactory(streamsBuilderFactoryBean);
arguments[parameterIndex] = table;
else {
handleKTableGlobalKTableInputs(arguments, parameterIndex, inboundName, parameterType, targetBean, streamsBuilderFactoryBean,
streamsBuilder, extendedConsumerProperties, keySerde, valueSerde, autoOffsetReset);
}
}
catch (Exception ex) {
@@ -370,52 +316,6 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
return arguments;
}
private GlobalKTable<?, ?> getGlobalKTable(StreamsBuilder streamsBuilder,
Serde<?> keySerde, Serde<?> valueSerde, String materializedAs,
String bindingDestination, Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAsGlobalKTable(streamsBuilder, bindingDestination,
materializedAs, keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.globalTable(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private KTable<?, ?> getKTable(StreamsBuilder streamsBuilder, Serde<?> keySerde,
Serde<?> valueSerde, String materializedAs, String bindingDestination,
Topology.AutoOffsetReset autoOffsetReset) {
return materializedAs != null
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
keySerde, valueSerde, autoOffsetReset)
: streamsBuilder.table(bindingDestination,
Consumed.with(keySerde, valueSerde)
.withOffsetResetPolicy(autoOffsetReset));
}
private <K, V> KTable<K, V> materializedAs(StreamsBuilder streamsBuilder,
String destination, String storeName, Serde<K> k, Serde<V> v,
Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.table(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> GlobalKTable<K, V> materializedAsGlobalKTable(
StreamsBuilder streamsBuilder, String destination, String storeName,
Serde<K> k, Serde<V> v, Topology.AutoOffsetReset autoOffsetReset) {
return streamsBuilder.globalTable(
this.bindingServiceProperties.getBindingDestination(destination),
Consumed.with(k, v).withOffsetResetPolicy(autoOffsetReset),
getMaterialized(storeName, k, v));
}
private <K, V> Materialized<K, V, KeyValueStore<Bytes, byte[]>> getMaterialized(
String storeName, Serde<K> k, Serde<V> v) {
return Materialized.<K, V, KeyValueStore<Bytes, byte[]>>as(storeName)
.withKeySerde(k).withValueSerde(v);
}
private StoreBuilder buildStateStore(KafkaStreamsStateStoreProperties spec) {
try {
@@ -488,98 +388,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
+ ". Inbound message conversion done by Spring Cloud Stream.");
}
stream = stream.mapValues((value) -> {
Object returnValue;
String contentType = bindingProperties.getContentType();
if (value != null && !StringUtils.isEmpty(contentType) && !nativeDecoding) {
returnValue = MessageBuilder.withPayload(value)
.setHeader(MessageHeaders.CONTENT_TYPE, contentType).build();
}
else {
returnValue = value;
}
return returnValue;
});
return stream;
}
private void enableNativeDecodingForKTableAlways(Class<?> parameterType,
BindingProperties bindingProperties) {
if (parameterType.isAssignableFrom(KTable.class)
|| parameterType.isAssignableFrom(GlobalKTable.class)) {
if (bindingProperties.getConsumer() == null) {
bindingProperties.setConsumer(new ConsumerProperties());
}
// No framework-level message conversion provided for KTable/GlobalKTable; it's
// done by the broker.
bindingProperties.getConsumer().setUseNativeDecoding(true);
}
}
@SuppressWarnings({"unchecked"})
private void buildStreamsBuilderAndRetrieveConfig(Method method,
ApplicationContext applicationContext, String inboundName) {
ConfigurableListableBeanFactory beanFactory = this.applicationContext
.getBeanFactory();
Map<String, Object> streamConfigGlobalProperties = applicationContext
.getBean("streamConfigGlobalProperties", Map.class);
KafkaStreamsConsumerProperties extendedConsumerProperties = this.kafkaStreamsExtendedBindingProperties
.getExtendedConsumerProperties(inboundName);
streamConfigGlobalProperties
.putAll(extendedConsumerProperties.getConfiguration());
String applicationId = extendedConsumerProperties.getApplicationId();
// override application.id if set at the individual binding level.
if (StringUtils.hasText(applicationId)) {
streamConfigGlobalProperties.put(StreamsConfig.APPLICATION_ID_CONFIG,
applicationId);
}
int concurrency = this.bindingServiceProperties.getConsumerProperties(inboundName)
.getConcurrency();
// override concurrency if set at the individual binding level.
if (concurrency > 1) {
streamConfigGlobalProperties.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG,
concurrency);
}
Map<String, KafkaStreamsDlqDispatch> kafkaStreamsDlqDispatchers = applicationContext
.getBean("kafkaStreamsDlqDispatchers", Map.class);
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(
streamConfigGlobalProperties) {
@Override
public Properties asProperties() {
Properties properties = super.asProperties();
properties.put(SendToDlqAndContinue.KAFKA_STREAMS_DLQ_DISPATCHERS,
kafkaStreamsDlqDispatchers);
return properties;
}
};
StreamsBuilderFactoryBean streamsBuilder = this.cleanupConfig == null
? new StreamsBuilderFactoryBean(kafkaStreamsConfiguration)
: new StreamsBuilderFactoryBean(kafkaStreamsConfiguration,
this.cleanupConfig);
streamsBuilder.setAutoStartup(false);
BeanDefinition streamsBuilderBeanDefinition = BeanDefinitionBuilder
.genericBeanDefinition(
(Class<StreamsBuilderFactoryBean>) streamsBuilder.getClass(),
() -> streamsBuilder)
.getRawBeanDefinition();
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(
"stream-builder-" + method.getName(), streamsBuilderBeanDefinition);
StreamsBuilderFactoryBean streamsBuilderX = applicationContext.getBean(
"&stream-builder-" + method.getName(), StreamsBuilderFactoryBean.class);
this.methodStreamsBuilderFactoryBeanMap.put(method, streamsBuilderX);
}
@Override
public final void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
this.applicationContext = (ConfigurableApplicationContext) applicationContext;
return getkStream(bindingProperties, stream, nativeDecoding);
}
private void validateStreamListenerMethod(StreamListener streamListener,
@@ -630,8 +439,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator
&& this.applicationContext.containsBean(targetBeanName)) {
Class<?> targetBeanClass = this.applicationContext.getType(targetBeanName);
if (targetBeanClass != null) {
boolean supports = KStream.class.isAssignableFrom(targetBeanClass)
&& KStream.class.isAssignableFrom(methodParameter.getParameterType());
boolean supports = KafkaStreamsBinderUtils.supportsKStream(methodParameter, targetBeanClass);
if (!supports) {
supports = KTable.class.isAssignableFrom(targetBeanClass)
&& KTable.class.isAssignableFrom(methodParameter.getParameterType());
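For context, a StreamListener-style processor that this orchestrator adapts might look like the following (a sketch with an assumed binding interface; the orchestrator resolves the Serdes, wraps the KStream proxy with the real stream, and adapts the return value to the @SendTo target):

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(WordCountExample.KStreamProcessor.class)
public class WordCountExample {

    // Hypothetical binding interface declaring one KStream input and one KStream output.
    interface KStreamProcessor {

        @Input("input")
        KStream<?, ?> input();

        @Output("output")
        KStream<?, ?> output();
    }

    @StreamListener("input")
    @SendTo("output")
    public KStream<String, String> process(KStream<String, String> words) {
        return words.mapValues(String::toUpperCase);
    }
}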


@@ -18,15 +18,22 @@ package org.springframework.cloud.stream.binder.kafka.streams;
import java.util.Map;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.util.StringUtils;
/**
@@ -53,7 +60,9 @@ import org.springframework.util.StringUtils;
* @author Soby Chacko
* @author Lei Chen
*/
class KeyValueSerdeResolver {
public class KeyValueSerdeResolver {
private static final Log LOG = LogFactory.getLog(KeyValueSerdeResolver.class);
private final Map<String, Object> streamConfigGlobalProperties;
@@ -78,6 +87,13 @@ class KeyValueSerdeResolver {
return getKeySerde(keySerdeString);
}
public Serde<?> getInboundKeySerde(
KafkaStreamsConsumerProperties extendedConsumerProperties, ResolvableType resolvableType) {
String keySerdeString = extendedConsumerProperties.getKeySerde();
return getKeySerde(keySerdeString, resolvableType);
}
/**
* Provide the {@link Serde} for inbound value.
* @param consumerProperties {@link ConsumerProperties} on binding
@@ -105,6 +121,27 @@ class KeyValueSerdeResolver {
return valueSerde;
}
public Serde<?> getInboundValueSerde(ConsumerProperties consumerProperties,
KafkaStreamsConsumerProperties extendedConsumerProperties,
ResolvableType resolvableType) {
Serde<?> valueSerde;
String valueSerdeString = extendedConsumerProperties.getValueSerde();
try {
if (consumerProperties != null && consumerProperties.isUseNativeDecoding()) {
valueSerde = getValueSerde(valueSerdeString, resolvableType);
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
/**
* Provide the {@link Serde} for outbound key.
* @param properties binding level extended {@link KafkaStreamsProducerProperties}
@@ -114,6 +151,11 @@ class KeyValueSerdeResolver {
return getKeySerde(properties.getKeySerde());
}
public Serde<?> getOuboundKeySerde(KafkaStreamsProducerProperties properties, ResolvableType resolvableType) {
return getKeySerde(properties.getKeySerde(), resolvableType);
}
/**
* Provide the {@link Serde} for outbound value.
* @param producerProperties {@link ProducerProperties} on binding
@@ -140,6 +182,25 @@ class KeyValueSerdeResolver {
return valueSerde;
}
public Serde<?> getOutboundValueSerde(ProducerProperties producerProperties,
KafkaStreamsProducerProperties kafkaStreamsProducerProperties, ResolvableType resolvableType) {
Serde<?> valueSerde;
try {
if (producerProperties.isUseNativeEncoding()) {
valueSerde = getValueSerde(
kafkaStreamsProducerProperties.getValueSerde(), resolvableType);
}
else {
valueSerde = Serdes.ByteArray();
}
valueSerde.configure(this.streamConfigGlobalProperties, false);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return valueSerde;
}
/**
* Provide the {@link Serde} for state store.
* @param keySerdeString serde class used for key
@@ -170,12 +231,7 @@ class KeyValueSerdeResolver {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
keySerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.key.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.key.serde"),
Serde.class)
: Serdes.ByteArray();
keySerde = getFallbackSerde("default.key.serde");
}
keySerde.configure(this.streamConfigGlobalProperties, true);
@@ -186,6 +242,75 @@ class KeyValueSerdeResolver {
return keySerde;
}
private Serde<?> getKeySerde(String keySerdeString, ResolvableType resolvableType) {
Serde<?> keySerde = null;
try {
if (StringUtils.hasText(keySerdeString)) {
keySerde = Utils.newInstance(keySerdeString, Serde.class);
}
else {
if (resolvableType != null &&
(isResolvalbeKafkaStreamsType(resolvableType) || isResolvableKStreamArrayType(resolvableType))) {
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(0) : resolvableType.getGeneric(0);
keySerde = getSerde(generic);
}
if (keySerde == null) {
keySerde = getFallbackSerde("default.key.serde");
}
}
keySerde.configure(this.streamConfigGlobalProperties, true);
}
catch (ClassNotFoundException ex) {
throw new IllegalStateException("Serde class not found: ", ex);
}
return keySerde;
}
private boolean isResolvableKStreamArrayType(ResolvableType resolvableType) {
return resolvableType.isArray() &&
KStream.class.isAssignableFrom(resolvableType.getComponentType().getRawClass());
}
private boolean isResolvalbeKafkaStreamsType(ResolvableType resolvableType) {
return resolvableType.getRawClass() != null && (KStream.class.isAssignableFrom(resolvableType.getRawClass()) || KTable.class.isAssignableFrom(resolvableType.getRawClass()) ||
GlobalKTable.class.isAssignableFrom(resolvableType.getRawClass()));
}
private Serde<?> getSerde(ResolvableType generic) {
Serde<?> serde = null;
if (generic.getRawClass() != null) {
if (Integer.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.Integer();
}
else if (Long.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.Long();
}
else if (Short.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.Short();
}
else if (Double.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.Double();
}
else if (Float.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.Float();
}
else if (byte[].class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.ByteArray();
}
else if (String.class.isAssignableFrom(generic.getRawClass())) {
serde = Serdes.String();
}
else {
// If the type is Object, skip assigning the JsonSerde and let the fallback mechanism take precedence.
if (!generic.getRawClass().isAssignableFrom((Object.class))) {
serde = new JsonSerde(generic.getRawClass());
}
}
}
return serde;
}
private Serde<?> getValueSerde(String valueSerdeString)
throws ClassNotFoundException {
Serde<?> valueSerde;
@@ -193,12 +318,39 @@ class KeyValueSerdeResolver {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
valueSerde = this.binderConfigurationProperties.getConfiguration()
.containsKey("default.value.serde")
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get("default.value.serde"),
Serde.class)
: Serdes.ByteArray();
valueSerde = getFallbackSerde("default.value.serde");
}
return valueSerde;
}
private Serde<?> getFallbackSerde(String s) throws ClassNotFoundException {
return this.binderConfigurationProperties.getConfiguration()
.containsKey(s)
? Utils.newInstance(this.binderConfigurationProperties
.getConfiguration().get(s),
Serde.class)
: Serdes.ByteArray();
}
@SuppressWarnings("unchecked")
private Serde<?> getValueSerde(String valueSerdeString, ResolvableType resolvableType)
throws ClassNotFoundException {
Serde<?> valueSerde = null;
if (StringUtils.hasText(valueSerdeString)) {
valueSerde = Utils.newInstance(valueSerdeString, Serde.class);
}
else {
if (resolvableType != null && ((isResolvalbeKafkaStreamsType(resolvableType)) ||
(isResolvableKStreamArrayType(resolvableType)))) {
ResolvableType generic = resolvableType.isArray() ? resolvableType.getComponentType().getGeneric(1) : resolvableType.getGeneric(1);
valueSerde = getSerde(generic);
}
if (valueSerde == null) {
valueSerde = getFallbackSerde("default.value.serde");
}
}
return valueSerde;
}
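Assuming a typed function bean like the one below (names are illustrative), the inference above would pick Serdes.String() for the key and Serdes.Long() for the value from the KStream generics; other POJO generics fall back to a JsonSerde, and anything unresolvable falls back to the configured default key/value Serde or byte-array Serdes.

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SerdeInferenceExample {

    // Key generic String -> Serdes.String(), value generic Long -> Serdes.Long(),
    // so no keySerde/valueSerde needs to be configured on the binding.
    @Bean
    public Function<KStream<String, Long>, KStream<String, Long>> totals() {
        return input -> input.filter((key, value) -> value != null && value > 0);
    }
}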


@@ -1,66 +0,0 @@
/*
* Copyright 2018-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreType;
/**
* Registry that contains the {@link QueryableStoreType}s created from user
* applications.
*
* @author Soby Chacko
* @author Renwei Han
* @since 2.0.0
* @deprecated in favor of {@link InteractiveQueryService}
*/
public class QueryableStoreRegistry {
private final KafkaStreamsRegistry kafkaStreamsRegistry;
public QueryableStoreRegistry(KafkaStreamsRegistry kafkaStreamsRegistry) {
this.kafkaStreamsRegistry = kafkaStreamsRegistry;
}
/**
* Retrieve and return a queryable store by name created in the application.
* @param storeName name of the queryable store
* @param storeType type of the queryable store
* @param <T> generic queryable store
* @return queryable store.
* @deprecated in favor of
* {@link InteractiveQueryService#getQueryableStore(String, QueryableStoreType)}
*/
public <T> T getQueryableStoreType(String storeName,
QueryableStoreType<T> storeType) {
for (KafkaStreams kafkaStream : this.kafkaStreamsRegistry.getKafkaStreams()) {
try {
T store = kafkaStream.store(storeName, storeType);
if (store != null) {
return store;
}
}
catch (InvalidStateStoreException ignored) {
// pass through
}
}
return null;
}
}
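For reference, the replacement usage hinted at by the deprecation note above would look roughly like this (store name and types assumed for illustration):

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;

public class StoreLookupExample {

    private final InteractiveQueryService interactiveQueryService;

    public StoreLookupExample(InteractiveQueryService interactiveQueryService) {
        this.interactiveQueryService = interactiveQueryService;
    }

    public Long countFor(String key) {
        // getQueryableStore replaces QueryableStoreRegistry#getQueryableStoreType.
        ReadOnlyKeyValueStore<String, Long> store = interactiveQueryService
                .getQueryableStore("counts-store", QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(key);
    }
}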


@@ -25,7 +25,7 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
/**
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context and
* start them. As each one completes starting, register the associated KafkaStreams object
* into {@link QueryableStoreRegistry}.
* into {@link InteractiveQueryService}.
*
* This {@link SmartLifecycle} class ensures that the bean created from it is started very
* late through the bootstrap process by setting the phase value closer to


@@ -0,0 +1,91 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.ResolvableType;
import org.springframework.core.type.AnnotatedTypeMetadata;
import org.springframework.util.ClassUtils;
/**
* Custom {@link org.springframework.context.annotation.Condition} that detects the presence
* of java.util.function.Function/BiFunction/Consumer/BiConsumer beans. Used for Kafka Streams function support.
*
* @author Soby Chacko
* @since 2.2.0
*/
public class FunctionDetectorCondition extends SpringBootCondition {
@SuppressWarnings({ "unchecked", "rawtypes" })
@Override
public ConditionOutcome getMatchOutcome(ConditionContext context, AnnotatedTypeMetadata metadata) {
if (context != null && context.getBeanFactory() != null) {
Map functionTypes = context.getBeanFactory().getBeansOfType(Function.class);
functionTypes.putAll(context.getBeanFactory().getBeansOfType(Consumer.class));
functionTypes.putAll(context.getBeanFactory().getBeansOfType(BiFunction.class));
functionTypes.putAll(context.getBeanFactory().getBeansOfType(BiConsumer.class));
final Map<String, Object> kstreamFunctions = pruneFunctionBeansForKafkaStreams(functionTypes, context);
if (!kstreamFunctions.isEmpty()) {
return ConditionOutcome.match("Matched. Function/BiFunction/Consumer beans found");
}
else {
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
}
return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
}
private static <T> Map<String, T> pruneFunctionBeansForKafkaStreams(Map<String, T> originalFunctionBeans,
ConditionContext context) {
final Map<String, T> prunedMap = new HashMap<>();
for (String key : originalFunctionBeans.keySet()) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
context.getBeanFactory().getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
prunedMap.put(key, originalFunctionBeans.get(key));
}
}
catch (NoSuchMethodException e) {
//ignore
}
}
return prunedMap;
}
}
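As an example of what the condition above matches (bean and type names assumed for illustration), any Function/BiFunction/Consumer/BiConsumer bean whose first generic parameter resolves to KStream, KTable, or GlobalKTable causes the Kafka Streams function support to kick in:

import java.util.function.BiConsumer;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DetectedFunctionConfig {

    // First generic parameter is a KStream, so pruneFunctionBeansForKafkaStreams
    // keeps this bean and the condition reports a match.
    @Bean
    public BiConsumer<KStream<String, Long>, KTable<String, String>> logJoined() {
        return (stream, table) -> stream
                .join(table, (count, label) -> label + ":" + count)
                .foreach((key, value) -> System.out.println(key + " -> " + value));
    }
}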


@@ -0,0 +1,248 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.cloud.stream.binding.AbstractBindableProxyFactory;
import org.springframework.cloud.stream.binding.BindableProxyFactory;
import org.springframework.cloud.stream.binding.BoundTargetHolder;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
/**
* Kafka Streams specific target bindings proxy factory. See {@link AbstractBindableProxyFactory} for more details.
*
* Targets bound by this factory:
*
* {@link KStream}
* {@link KTable}
* {@link GlobalKTable}
*
* This class looks at the Function bean's return signature as a {@link ResolvableType} and introspects the individual types,
* binding them along the way.
*
* All types on the {@link ResolvableType} are bound except for KStream[] array types on the outbound, whose binding is
* deferred to a later stage, because at this point there is no way to know
* the actual size of the returned array; that is only known once the function has been invoked and a result is available.
*
* @author Soby Chacko
* @since 3.0.0
*/
public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory implements InitializingBean, BeanFactoryAware {
/**
* Default output binding name. Output binding may occur later on in the function invoker (outside of this class),
* thus making this field part of the API.
*/
public static final String DEFAULT_OUTPUT_SUFFIX = "out";
private static final String DEFAULT_INPUT_SUFFIX = "in";
private static Log log = LogFactory.getLog(BindableProxyFactory.class);
@Autowired
private StreamFunctionProperties streamFunctionProperties;
private final ResolvableType type;
private final String functionName;
private BeanFactory beanFactory;
public KafkaStreamsBindableProxyFactory(ResolvableType type, String functionName) {
super(type.getType().getClass());
this.type = type;
this.functionName = functionName;
}
@Override
public void afterPropertiesSet() {
Assert.notEmpty(KafkaStreamsBindableProxyFactory.this.bindingTargetFactories,
"'bindingTargetFactories' cannot be empty");
int resolvableTypeDepthCounter = 0;
ResolvableType argument = this.type.getGeneric(resolvableTypeDepthCounter++);
List<String> inputBindings = buildInputBindings();
Iterator<String> iterator = inputBindings.iterator();
String next = iterator.next();
bindInput(argument, next);
if (this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class))) {
argument = this.type.getGeneric(resolvableTypeDepthCounter++);
next = iterator.next();
bindInput(argument, next);
}
ResolvableType outboundArgument = this.type.getGeneric(resolvableTypeDepthCounter);
while (isAnotherFunctionOrConsumerFound(outboundArgument)) {
//The function is a curried function. We should introspect the partial function chain hierarchy.
argument = outboundArgument.getGeneric(0);
String next1 = iterator.next();
bindInput(argument, next1);
outboundArgument = outboundArgument.getGeneric(1);
}
//Introspect output for binding.
if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
outboundArgument.getRawClass().isAssignableFrom(KStream.class))) {
// if the type is array, we need to do a late binding as we don't know the number of
// output bindings at this point in the flow.
List<String> outputBindings = streamFunctionProperties.getOutputBindings().get(this.functionName);
String outputBinding = null;
if (!CollectionUtils.isEmpty(outputBindings)) {
Iterator<String> outputBindingsIter = outputBindings.iterator();
if (outputBindingsIter.hasNext()) {
outputBinding = outputBindingsIter.next();
}
}
else {
outputBinding = String.format("%s_%s", this.functionName, DEFAULT_OUTPUT_SUFFIX);
}
Assert.isTrue(outputBinding != null, "output binding is not inferred.");
KafkaStreamsBindableProxyFactory.this.outputHolders.put(outputBinding,
new BoundTargetHolder(getBindingTargetFactory(KStream.class)
.createOutput(outputBinding), true));
String outputBinding1 = outputBinding;
RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
rootBeanDefinition1.setInstanceSupplier(() -> outputHolders.get(outputBinding1).getBoundTarget());
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
registry.registerBeanDefinition(outputBinding1, rootBeanDefinition1);
}
}
private boolean isAnotherFunctionOrConsumerFound(ResolvableType arg1) {
return arg1 != null && !arg1.isArray() && arg1.getRawClass() != null &&
(arg1.getRawClass().isAssignableFrom(Function.class) || arg1.getRawClass().isAssignableFrom(Consumer.class));
}
/**
* If the application provides the property spring.cloud.stream.function.inputBindings.functionName,
* that takes precedence. Otherwise, the binding names default to functionName_in for a single input, or functionName_in_0, functionName_in_1 and so on
* for multiple inputs.
*
* @return an ordered collection of input bindings to use
*/
private List<String> buildInputBindings() {
List<String> inputs = new ArrayList<>();
List<String> inputBindings = streamFunctionProperties.getInputBindings().get(this.functionName);
if (!CollectionUtils.isEmpty(inputBindings)) {
inputs.addAll(inputBindings);
return inputs;
}
int numberOfInputs = this.type.getRawClass() != null &&
(this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
this.type.getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();
if (numberOfInputs == 1) {
inputs.add(String.format("%s_%s", this.functionName, DEFAULT_INPUT_SUFFIX));
return inputs;
}
else {
int i = 0;
while (i < numberOfInputs) {
inputs.add(String.format("%s_%s_%d", this.functionName, DEFAULT_INPUT_SUFFIX, i++));
}
return inputs;
}
}
private int getNumberOfInputs() {
int numberOfInputs = 1;
ResolvableType arg1 = this.type.getGeneric(1);
while (isAnotherFunctionOrConsumerFound(arg1)) {
arg1 = arg1.getGeneric(1);
numberOfInputs++;
}
return numberOfInputs;
}
private void bindInput(ResolvableType arg0, String inputName) {
if (arg0.getRawClass() != null) {
KafkaStreamsBindableProxyFactory.this.inputHolders.put(inputName,
new BoundTargetHolder(getBindingTargetFactory(arg0.getRawClass())
.createInput(inputName), true));
}
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition();
rootBeanDefinition.setInstanceSupplier(() -> inputHolders.get(inputName).getBoundTarget());
registry.registerBeanDefinition(inputName, rootBeanDefinition);
}
@Override
public Set<String> getInputs() {
Set<String> ins = new LinkedHashSet<>();
this.inputHolders.forEach((s, BoundTargetHolder) -> ins.add(s));
return ins;
}
@Override
public Set<String> getOutputs() {
Set<String> outs = new LinkedHashSet<>();
this.outputHolders.forEach((s, BoundTargetHolder) -> outs.add(s));
return outs;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
public void addOutputBinding(String output, Class<?> clazz) {
KafkaStreamsBindableProxyFactory.this.outputHolders.put(output,
new BoundTargetHolder(getBindingTargetFactory(clazz)
.createOutput(output), true));
}
public Map<String, BoundTargetHolder> getOutputHolders() {
return outputHolders;
}
}
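Putting the naming logic above together, a hypothetical BiFunction bean named aggregate would (absent spring.cloud.stream.function.inputBindings/outputBindings overrides) be bound as aggregate_in_0 and aggregate_in_1 on the inbound side and aggregate_out on the outbound side; a sketch:

import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BindingNamesExample {

    // Two inputs (BiFunction) and one KStream output, so the proxy factory derives
    // "aggregate_in_0", "aggregate_in_1" and "aggregate_out" as binding names.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, String>> aggregate() {
        return (stream, table) -> stream.join(table, (count, label) -> label + ":" + count);
    }
}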


@@ -16,34 +16,57 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
import org.springframework.boot.autoconfigure.AutoConfigureBefore;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
import org.springframework.cloud.stream.config.BinderFactoryAutoConfiguration;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
/**
* @author Soby Chacko
* @since 2.2.0
*/
@Configuration
@ConditionalOnProperty("spring.cloud.stream.function.definition")
@EnableConfigurationProperties(StreamFunctionProperties.class)
@AutoConfigureBefore(BinderFactoryAutoConfiguration.class)
public class KafkaStreamsFunctionAutoConfiguration {
@Bean
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionProcessorInvoker kafkaStreamsFunctionProcessorInvoker(
KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
StreamFunctionProperties properties) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableType(),
properties.getDefinition(), kafkaStreamsFunctionProcessor);
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
kafkaStreamsFunctionProcessor);
}
@Bean
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor(
StreamFunctionProperties properties) {
return new KafkaStreamsFunctionBeanPostProcessor(properties);
@Conditional(FunctionDetectorCondition.class)
public KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor() {
return new KafkaStreamsFunctionBeanPostProcessor();
}
@Bean
@Conditional(FunctionDetectorCondition.class)
public BeanFactoryPostProcessor implicitFunctionKafkaStreamsBinder(KafkaStreamsFunctionBeanPostProcessor kafkaStreamsFunctionBeanPostProcessor) {
return beanFactory -> {
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
for (String s : kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes().keySet()) {
RootBeanDefinition rootBeanDefinition = new RootBeanDefinition(
KafkaStreamsBindableProxyFactory.class);
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes().get(s));
rootBeanDefinition.getConstructorArgumentValues()
.addGenericArgumentValue(s);
registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory", rootBeanDefinition);
}
};
}
}


@@ -17,6 +17,13 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.stream.Stream;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
@@ -24,40 +31,46 @@ import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.cloud.stream.function.StreamFunctionProperties;
import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;
/**
*
* @author Soby Chacko
* @since 2.1.0
* @since 2.2.0
*
*/
class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
private final StreamFunctionProperties kafkaStreamsFunctionProperties;
private ConfigurableListableBeanFactory beanFactory;
private ResolvableType resolvableType;
private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties properties) {
this.kafkaStreamsFunctionProperties = properties;
}
public ResolvableType getResolvableType() {
return this.resolvableType;
public Map<String, ResolvableType> getResolvableTypes() {
return this.resolvableTypeMap;
}
@Override
public void afterPropertiesSet() throws Exception {
public void afterPropertiesSet() {
String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
String[] biFunctionNames = this.beanFactory.getBeanNamesForType(BiFunction.class);
String[] consumerNames = this.beanFactory.getBeanNamesForType(Consumer.class);
String[] biConsumerNames = this.beanFactory.getBeanNamesForType(BiConsumer.class);
Stream.concat(
Stream.concat(Stream.of(functionNames), Stream.of(consumerNames)),
Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)))
.forEach(this::extractResolvableTypes);
}
private void extractResolvableTypes(String key) {
final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
this.beanFactory.getBeanDefinition(kafkaStreamsFunctionProperties.getDefinition()))
this.beanFactory.getBeanDefinition(key))
.getMetadata().getClassName(),
ClassUtils.getDefaultClassLoader());
try {
Method method = classObj.getMethod(this.kafkaStreamsFunctionProperties.getDefinition());
this.resolvableType = ResolvableType.forMethodReturnType(method, classObj);
Method method = classObj.getMethod(key);
ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
resolvableTypeMap.put(key, resolvableType);
}
catch (NoSuchMethodException e) {
//ignore

View File

@@ -16,6 +16,8 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import javax.annotation.PostConstruct;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsFunctionProcessor;
@@ -26,21 +28,20 @@ import org.springframework.core.ResolvableType;
* @author Soby Chacko
* @since 2.1.0
*/
class KafkaStreamsFunctionProcessorInvoker {
public class KafkaStreamsFunctionProcessorInvoker {
private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
private final ResolvableType resolvableType;
private final String functionName;
private final Map<String, ResolvableType> resolvableTypeMap;
KafkaStreamsFunctionProcessorInvoker(ResolvableType resolvableType, String functionName,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
public KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor) {
this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
this.resolvableType = resolvableType;
this.functionName = functionName;
this.resolvableTypeMap = resolvableTypeMap;
}
@PostConstruct
void invoke() {
this.kafkaStreamsFunctionProcessor.orchestrateStreamListenerSetupMethod(resolvableType, functionName);
resolvableTypeMap.forEach((key, value) ->
this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key));
}
}
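The post-processor above collects Function, BiFunction, Consumer and BiConsumer beans and records each bean method's return type in resolvableTypeMap; the invoker then hands every (name, type) pair to setupFunctionInvokerForKafkaStreams. A minimal sketch of a BiFunction bean that would be picked up this way (names and topology are illustrative):

import java.util.function.BiFunction;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ClickRegionJoinConfiguration {

    // Recorded in resolvableTypeMap under the bean name "join"; the invoker calls
    // setupFunctionInvokerForKafkaStreams with this resolved type and bean name.
    @Bean
    public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> join() {
        return (clicks, regions) -> clicks.leftJoin(regions,
                (clickCount, region) -> clickCount);
    }
}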

View File

@@ -1,42 +0,0 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.lang.reflect.Type;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.function.context.WrapperDetector;
/**
* @author Soby Chacko
*/
public class KafkaStreamsFunctionWrapperDetector implements WrapperDetector {
@Override
public boolean isWrapper(Type type) {
if (type instanceof Class<?>) {
Class<?> cls = (Class<?>) type;
return KStream.class.isAssignableFrom(cls) ||
KTable.class.isAssignableFrom(cls) ||
GlobalKTable.class.isAssignableFrom(cls);
}
return false;
}
}

View File

@@ -25,9 +25,11 @@ import org.springframework.boot.context.properties.ConfigurationProperties;
* stream processing, and one can provide window-specific properties at runtime and use
* those properties in the applications using this class.
*
* @deprecated The properties exposed by this class can be used directly on Kafka Streams API in the application.
* @author Soby Chacko
*/
@ConfigurationProperties("spring.cloud.stream.kafka.streams")
@Deprecated
public class KafkaStreamsApplicationSupportProperties {
private TimeWindow timeWindow;
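Per the deprecation note, the window settings carried by this class can be expressed directly against the Kafka Streams DSL. A hedged sketch, assuming a Kafka Streams version with the Duration-based overloads; the window sizes are illustrative:

import java.time.Duration;

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

class WindowedCountExample {

    // The window is declared on the DSL itself instead of via the deprecated
    // spring.cloud.stream.kafka.streams.timeWindow.* properties.
    KTable<Windowed<String>, Long> countInWindows(KStream<String, String> words) {
        return words
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofSeconds(30)).advanceBy(Duration.ofSeconds(5)))
                .count();
    }
}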

View File

@@ -0,0 +1,242 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.kafka.support.serializer.JsonSerde;
/**
* A convenient {@link Serde} for {@link java.util.Collection} implementations.
*
* Whenever a Kafka Streams application needs to collect data into a container object like
* {@link java.util.Collection}, this Serde class can be used as a convenience for
* serialization needs. One example of where this may come in handy is when the application
* performs aggregation or reduction operations and simply needs to hold an
* {@link Iterable} type.
*
* By default, this Serde will use {@link JsonSerde} for serializing the inner objects.
* This can be changed by providing an explicit Serde during creation of this object.
*
* Here is an example of a possible use case:
*
* <pre class="code">
* .aggregate(ArrayList::new,
* (k, v, aggregates) -> {
* aggregates.add(v);
* return aggregates;
* },
* Materialized.&lt;String, Collection&lt;Foo&gt;, WindowStore&lt;Bytes, byte[]&gt;&gt;as(
* "foo-store")
* .withKeySerde(Serdes.String())
* .withValueSerde(new CollectionSerde<>(Foo.class, ArrayList.class)))
* </pre>
*
* The Collection types supported by this Serde are {@link java.util.ArrayList}, {@link java.util.LinkedList},
* {@link java.util.PriorityQueue} and {@link java.util.HashSet}. The deserializer will throw an exception
* if any other Collection type is used.
*
* @param <E> type of the underlying object that the collection holds
* @author Soby Chacko
* @since 3.0.0
*/
public class CollectionSerde<E> implements Serde<Collection<E>> {
/**
* Serde used for serializing the inner object.
*/
private final Serde<Collection<E>> inner;
/**
* Type of the collection class. This has to be a class that
* implements the {@link java.util.Collection} interface.
*/
private final Class<?> collectionClass;
/**
* Constructor to use when the application wants to specify the type
* of the Serde used for the inner object.
*
* @param serde specify an explicit Serde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Serde<E> serde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
this.inner =
Serdes.serdeFrom(
new CollectionSerializer<>(serde.serializer()),
new CollectionDeserializer<>(serde.deserializer(), collectionsClass));
}
/**
* Constructor to delegate serialization operations for the inner objects
* to {@link JsonSerde}.
*
* @param targetTypeForJsonSerde target type used by the JsonSerde
* @param collectionsClass type of the Collection class
*/
public CollectionSerde(Class<?> targetTypeForJsonSerde, Class<?> collectionsClass) {
this.collectionClass = collectionsClass;
JsonSerde<E> jsonSerde = new JsonSerde(targetTypeForJsonSerde);
this.inner = Serdes.serdeFrom(
new CollectionSerializer<>(jsonSerde.serializer()),
new CollectionDeserializer<>(jsonSerde.deserializer(), collectionsClass));
}
@Override
public Serializer<Collection<E>> serializer() {
return inner.serializer();
}
@Override
public Deserializer<Collection<E>> deserializer() {
return inner.deserializer();
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
inner.serializer().configure(configs, isKey);
inner.deserializer().configure(configs, isKey);
}
@Override
public void close() {
inner.serializer().close();
inner.deserializer().close();
}
private static class CollectionSerializer<E> implements Serializer<Collection<E>> {
private Serializer<E> inner;
CollectionSerializer(Serializer<E> inner) {
this.inner = inner;
}
CollectionSerializer() { }
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public byte[] serialize(String topic, Collection<E> collection) {
final int size = collection.size();
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
final DataOutputStream dos = new DataOutputStream(baos);
final Iterator<E> iterator = collection.iterator();
try {
dos.writeInt(size);
while (iterator.hasNext()) {
final byte[] bytes = inner.serialize(topic, iterator.next());
dos.writeInt(bytes.length);
dos.write(bytes);
}
}
catch (IOException e) {
throw new RuntimeException("Unable to serialize the provided collection", e);
}
return baos.toByteArray();
}
@Override
public void close() {
inner.close();
}
}
private static class CollectionDeserializer<E> implements Deserializer<Collection<E>> {
private final Deserializer<E> valueDeserializer;
private final Class<?> collectionClass;
CollectionDeserializer(final Deserializer<E> valueDeserializer, Class<?> collectionClass) {
this.valueDeserializer = valueDeserializer;
this.collectionClass = collectionClass;
}
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
}
@Override
public Collection<E> deserialize(String topic, byte[] bytes) {
if (bytes == null || bytes.length == 0) {
return null;
}
Collection<E> collection = getCollection();
final DataInputStream dataInputStream = new DataInputStream(new ByteArrayInputStream(bytes));
try {
final int records = dataInputStream.readInt();
for (int i = 0; i < records; i++) {
final byte[] valueBytes = new byte[dataInputStream.readInt()];
dataInputStream.read(valueBytes);
collection.add(valueDeserializer.deserialize(topic, valueBytes));
}
}
catch (IOException e) {
throw new RuntimeException("Unable to deserialize collection", e);
}
return collection;
}
@Override
public void close() {
}
private Collection<E> getCollection() {
Collection<E> collection;
if (this.collectionClass.isAssignableFrom(ArrayList.class)) {
collection = new ArrayList<>();
}
else if (this.collectionClass.isAssignableFrom(HashSet.class)) {
collection = new HashSet<>();
}
else if (this.collectionClass.isAssignableFrom(LinkedList.class)) {
collection = new LinkedList<>();
}
else if (this.collectionClass.isAssignableFrom(PriorityQueue.class)) {
collection = new PriorityQueue<>();
}
else {
throw new IllegalArgumentException("Unsupported collection type - " + this.collectionClass);
}
return collection;
}
}
}
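A brief usage sketch for the two constructors documented above; Foo is a hypothetical payload type used only for illustration:

import java.util.ArrayList;

import org.springframework.cloud.stream.binder.kafka.streams.serde.CollectionSerde;
import org.springframework.kafka.support.serializer.JsonSerde;

class CollectionSerdeUsage {

    // Hypothetical domain type.
    static class Foo { }

    // Inner objects serialized with the default JsonSerde targeted at Foo.
    CollectionSerde<Foo> jsonBacked = new CollectionSerde<>(Foo.class, ArrayList.class);

    // Inner objects serialized with an explicitly supplied Serde.
    CollectionSerde<Foo> explicit = new CollectionSerde<>(new JsonSerde<>(Foo.class), ArrayList.class);
}

Either instance can then be passed to the Materialized.withValueSerde(...) call shown in the class-level Javadoc; the deserializer only supports the ArrayList, LinkedList, PriorityQueue and HashSet container types listed there.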

View File

@@ -24,9 +24,9 @@ import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;
import org.springframework.cloud.stream.converter.CompositeMessageConverterFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.converter.CompositeMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
@@ -35,7 +35,7 @@ import org.springframework.util.MimeTypeUtils;
/**
* A {@link Serde} implementation that wraps the list of {@link MessageConverter}s from
* {@link CompositeMessageConverterFactory}.
* {@link CompositeMessageConverter}.
*
* The primary motivation for this class is to provide an avro based {@link Serde} that is
* compatible with the schema registry that Spring Cloud Stream provides. When using the
@@ -90,11 +90,11 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
private final CompositeNonNativeSerializer<T> compositeNonNativeSerializer;
public CompositeNonNativeSerde(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
CompositeMessageConverter compositeMessageConverter) {
this.compositeNonNativeDeserializer = new CompositeNonNativeDeserializer<>(
compositeMessageConverterFactory);
compositeMessageConverter);
this.compositeNonNativeSerializer = new CompositeNonNativeSerializer<>(
compositeMessageConverterFactory);
compositeMessageConverter);
}
@Override
@@ -150,9 +150,8 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
private Class<?> valueClass;
CompositeNonNativeDeserializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override
@@ -197,9 +196,8 @@ public class CompositeNonNativeSerde<T> implements Serde<T> {
private MimeType mimeType;
CompositeNonNativeSerializer(
CompositeMessageConverterFactory compositeMessageConverterFactory) {
this.messageConverter = compositeMessageConverterFactory
.getMessageConverterForAllRegistered();
CompositeMessageConverter compositeMessageConverter) {
this.messageConverter = compositeMessageConverter;
}
@Override

View File

@@ -3,7 +3,3 @@ org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsApplicationSupportAutoConfiguration,\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionAutoConfiguration
org.springframework.cloud.function.context.WrapperDetector=\
org.springframework.cloud.stream.binder.kafka.streams.function.KafkaStreamsFunctionWrapperDetector

View File

@@ -27,7 +27,7 @@ import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
/**
* @author Soby Chacko
@@ -35,7 +35,7 @@ import org.springframework.kafka.test.rule.KafkaEmbedded;
public class KafkaStreamsBinderBootstrapTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testKafkaStreamsBinderWithCustomEnvironmentCanStart() {
@@ -48,10 +48,7 @@ public class KafkaStreamsBinderBootstrapTest {
"--spring.cloud.stream.binders.kBind1.type=kstream",
"--spring.cloud.stream.binders.kBind1.environment"
+ ".spring.cloud.stream.kafka.streams.binder.brokers"
+ "=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.binders.kBind1.environment.spring"
+ ".cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ "=" + embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}
@@ -64,9 +61,7 @@ public class KafkaStreamsBinderBootstrapTest {
+ "=testKafkaStreamsBinderWithStandardConfigurationCanStart",
"--spring.cloud.stream.bindings.input.destination=foo",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString());
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
applicationContext.close();
}

View File

@@ -38,9 +38,6 @@ import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -85,14 +82,12 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output1,output2,output3",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
@@ -102,8 +97,7 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId" +
"=KafkaStreamsBinderWordCountBranchesFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString());
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString());
try {
receiveAndValidate(context);
}
@@ -183,7 +177,6 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
}
}
@EnableBinding(KStreamProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class WordCountProcessorApplication {
@@ -208,18 +201,4 @@ public class KafkaStreamsBinderWordCountBranchesFunctionTests {
}
}
interface KStreamProcessorX {
@Input("input")
KStream<?, ?> input();
@Output("output1")
KStream<?, ?> output1();
@Output("output2")
KStream<?, ?> output2();
@Output("output3")
KStream<?, ?> output3();
}
}

View File

@@ -33,14 +33,13 @@ import org.apache.kafka.streams.kstream.TimeWindows;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -64,7 +63,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {
private static Consumer<String, String> consumer;
@BeforeClass
public static void setUp() throws Exception {
public static void setUp() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
@@ -79,24 +78,46 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
@Test
@Ignore
public void testKstreamWordCountFunction() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run(
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output",
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
@Test
public void testKstreamWordCountFunctionWithGeneratedApplicationId() throws Exception {
SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run(
"--spring.cloud.stream.function.inputBindings.process=input",
"--spring.cloud.stream.function.outputBindings.process=output",
"--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
@@ -167,10 +188,9 @@ public class KafkaStreamsBinderWordCountFunctionTests {
}
}
@EnableBinding(KafkaStreamsProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
public static class WordCountProcessorApplication {
@Bean
public Function<KStream<Object, String>, KStream<?, WordCount>> process() {
@@ -186,5 +206,4 @@ public class KafkaStreamsBinderWordCountFunctionTests {
new Date(key.window().start()), new Date(key.window().end()))));
}
}
}

View File

@@ -0,0 +1,150 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.function;
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.ProcessorSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import static org.assertj.core.api.Assertions.assertThat;
public class KafkaStreamsFunctionStateStoreTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testKafkaStreamsFuncionWithMultipleStateStores() throws Exception {
SpringApplication app = new SpringApplication(StateStoreTestApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in.destination=words",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
private void receiveAndValidate(ConfigurableApplicationContext context) throws Exception {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
try {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("words");
template.sendDefault("foobar");
Thread.sleep(2000L);
StateStoreTestApplication processorApplication = context
.getBean(StateStoreTestApplication.class);
KeyValueStore<Long, Long> state1 = processorApplication.state1;
assertThat(processorApplication.processed).isTrue();
assertThat(state1 != null).isTrue();
assertThat(state1.name()).isEqualTo("my-store");
WindowStore<Long, Long> state2 = processorApplication.state2;
assertThat(state2 != null).isTrue();
assertThat(state2.name()).isEqualTo("other-store");
assertThat(state2.persistent()).isTrue();
}
finally {
pf.destroy();
}
}
@EnableAutoConfiguration
public static class StateStoreTestApplication {
KeyValueStore<Long, Long> state1;
WindowStore<Long, Long> state2;
boolean processed;
@Bean
public java.util.function.Consumer<KStream<Object, String>> process() {
return input ->
input.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {
@Override
@SuppressWarnings("unchecked")
public void init(ProcessorContext context) {
state1 = (KeyValueStore<Long, Long>) context.getStateStore("my-store");
state2 = (WindowStore<Long, Long>) context.getStateStore("other-store");
}
@Override
public void process(Object key, String value) {
processed = true;
}
@Override
public void close() {
if (state1 != null) {
state1.close();
}
if (state2 != null) {
state2.close();
}
}
}, "my-store", "other-store");
}
@Bean
public StoreBuilder myStore() {
return Stores.keyValueStoreBuilder(
Stores.persistentKeyValueStore("my-store"), Serdes.Long(),
Serdes.Long());
}
@Bean
public StoreBuilder otherStore() {
return Stores.windowStoreBuilder(
Stores.persistentWindowStore("other-store",
3L, 3, 3L, false), Serdes.Long(),
Serdes.Long());
}
}
}

View File

@@ -39,9 +39,6 @@ import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -49,7 +46,7 @@ import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
@@ -72,49 +69,24 @@ public class StreamToGlobalKTableFunctionTests {
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-y.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams.function" +
".StreamToGlobalKTableFunctionTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.springframework.cloud.stream.binder.kafka.streams." +
"function.StreamToGlobalKTableFunctionTests$EnrichedOrderSerde",
"--spring.cloud.stream.function.inputBindings.process=order,customer,product",
"--spring.cloud.stream.function.outputBindings.process=enriched-order",
"--spring.cloud.stream.bindings.order.destination=orders",
"--spring.cloud.stream.bindings.customer.destination=customers",
"--spring.cloud.stream.bindings.product.destination=products",
"--spring.cloud.stream.bindings.enriched-order.destination=enriched-order",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=" +
"--spring.cloud.stream.kafka.streams.bindings.order.consumer.applicationId=" +
"StreamToGlobalKTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
Map<String, Object> senderPropsCustomer = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
CustomerSerde customerSerde = new CustomerSerde();
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
customerSerde.serializer().getClass());
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer =
new DefaultKafkaProducerFactory<>(senderPropsCustomer);
@@ -128,8 +100,7 @@ public class StreamToGlobalKTableFunctionTests {
Map<String, Object> senderPropsProduct = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
ProductSerde productSerde = new ProductSerde();
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, productSerde.serializer().getClass());
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Product> pfProduct =
new DefaultKafkaProducerFactory<>(senderPropsProduct);
@@ -144,8 +115,7 @@ public class StreamToGlobalKTableFunctionTests {
Map<String, Object> senderPropsOrder = KafkaTestUtils.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
OrderSerde orderSerde = new OrderSerde();
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, orderSerde.serializer().getClass());
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(senderPropsOrder);
KafkaTemplate<Long, Order> orderTemplate = new KafkaTemplate<>(pfOrder, true);
@@ -162,9 +132,8 @@ public class StreamToGlobalKTableFunctionTests {
embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
EnrichedOrderSerde enrichedOrderSerde = new EnrichedOrderSerde();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
enrichedOrderSerde.deserializer().getClass());
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams." +
"function.StreamToGlobalKTableFunctionTests.EnrichedOrder");
@@ -205,16 +174,6 @@ public class StreamToGlobalKTableFunctionTests {
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@Input("input-x")
GlobalKTable<?, ?> inputX();
@Input("input-y")
GlobalKTable<?, ?> inputY();
}
@EnableBinding(CustomGlobalKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class OrderEnricherApplication {
@@ -339,15 +298,4 @@ public class StreamToGlobalKTableFunctionTests {
}
}
public static class OrderSerde extends JsonSerde<Order> {
}
public static class CustomerSerde extends JsonSerde<Customer> {
}
public static class ProductSerde extends JsonSerde<Product> {
}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {
}
}

View File

@@ -20,6 +20,10 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Function;
import org.apache.kafka.clients.consumer.Consumer;
@@ -44,9 +48,6 @@ import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
@@ -56,6 +57,7 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.util.Assert;
import static org.assertj.core.api.Assertions.assertThat;
@@ -68,7 +70,7 @@ public class StreamToTableJoinFunctionTests {
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();
@Test
public void testStreamToTable() throws Exception {
public void testStreamToTable() {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
@@ -82,36 +84,112 @@ public class StreamToTableJoinFunctionTests {
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
runTest(app, consumer);
}
@Test
public void testStreamToTableBiFunction() {
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
runTest(app, consumer);
}
@Test
public void testStreamToTableBiConsumer() throws Exception {
SpringApplication app = new SpringApplication(BiConsumerApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
DefaultKafkaConsumerFactory<String, Long> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process1",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.bindings.process_in_0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process_in_1.destination=user-regions-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.applicationId" +
"--spring.cloud.stream.kafka.streams.bindings.process_in_0.consumer.applicationId" +
"=testStreamToTableBiConsumer",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
new KeyValue<>("alice", "asia")
);
Map<String, Object> senderProps1 = KafkaTestUtils.producerProps(embeddedKafka);
senderProps1.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps1.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
DefaultKafkaProducerFactory<String, String> pf1 = new DefaultKafkaProducerFactory<>(senderProps1);
KafkaTemplate<String, String> template1 = new KafkaTemplate<>(pf1, true);
template1.setDefaultTopic("user-regions-1");
for (KeyValue<String, String> keyValue : userRegions) {
template1.sendDefault(keyValue.key, keyValue.value);
}
// Input 2: Clicks per user (multiple records allowed per user).
List<KeyValue<String, Long>> userClicks = Arrays.asList(
new KeyValue<>("alice", 13L)
);
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
DefaultKafkaProducerFactory<String, Long> pf = new DefaultKafkaProducerFactory<>(senderProps);
KafkaTemplate<String, Long> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("user-clicks-1");
for (KeyValue<String, Long> keyValue : userClicks) {
template.sendDefault(keyValue.key, keyValue.value);
}
Assert.isTrue(BiConsumerApplication.latch.await(10, TimeUnit.SECONDS), "Failed to receive message");
}
finally {
consumer.close();
}
}
private void runTest(SpringApplication app, Consumer<String, Long> consumer) {
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.process_in_0.destination=user-clicks-1",
"--spring.cloud.stream.bindings.process_in_1.destination=user-regions-1",
"--spring.cloud.stream.bindings.process_out.destination=output-topic-1",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.process_in_0.consumer.applicationId" +
"=StreamToTableJoinFunctionTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes=" + embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(
@@ -188,11 +266,11 @@ public class StreamToTableJoinFunctionTests {
@Test
public void testGlobalStartOffsetWithLatestAndIndividualBindingWthEarliest() throws Exception {
SpringApplication app = new SpringApplication(CountClicksPerRegionApplication.class);
SpringApplication app = new SpringApplication(BiFunctionCountClicksPerRegionApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
Consumer<String, Long> consumer;
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-2",
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-3",
"false", embeddedKafka);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
@@ -229,29 +307,17 @@ public class StreamToTableJoinFunctionTests {
template.sendDefault(keyValue.key, keyValue.value);
}
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.function.definition=process1",
"--spring.cloud.stream.function.inputBindings.process=input-1,input-2",
"--spring.cloud.stream.bindings.input-1.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-2.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.bindings.process_out.destination=output-topic-2",
"--spring.cloud.stream.bindings.input-1.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-2.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde" +
"=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
"=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde" +
@@ -326,8 +392,17 @@ public class StreamToTableJoinFunctionTests {
assertThat(count).isEqualTo(expectedClicksPerRegion.size());
assertThat(actualClicksPerRegion).hasSameElementsAs(expectedClicksPerRegion);
//the following removal is a code smell. Check with Oleg to see why this is happening.
//culprit is BinderFactoryAutoConfiguration line 300 with the following code:
//if (StringUtils.hasText(name)) {
// ((StandardEnvironment) environment).getSystemProperties()
// .putIfAbsent("spring.cloud.stream.function.definition", name);
// }
context.getEnvironment().getSystemProperties()
.remove("spring.cloud.stream.function.definition");
}
finally {
consumer.close();
}
}
@@ -361,13 +436,12 @@ public class StreamToTableJoinFunctionTests {
}
@EnableBinding(KStreamKTableProcessor.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class CountClicksPerRegionApplication {
@Bean
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process1() {
public Function<KStream<String, Long>, Function<KTable<String, String>, KStream<String, Long>>> process() {
return userClicksStream -> (userRegionsTable -> (userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
@@ -375,36 +449,41 @@ public class StreamToTableJoinFunctionTests {
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce((firstClicks, secondClicks) -> firstClicks + secondClicks)
.reduce(Long::sum)
.toStream()));
}
}
interface KStreamKTableProcessor {
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input-1")
KStream<?, ?> input1();
/**
* Input binding.
*
* @return {@link Input} binding for {@link KStream} type.
*/
@Input("input-2")
KTable<?, ?> input2();
/**
* Output binding.
*
* @return {@link Output} binding for {@link KStream} type.
*/
@Output("output")
KStream<?, ?> output();
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
public static class BiFunctionCountClicksPerRegionApplication {
@Bean
public BiFunction<KStream<String, Long>, KTable<String, String>, KStream<String, Long>> process() {
return (userClicksStream, userRegionsTable) -> (userClicksStream
.leftJoin(userRegionsTable, (clicks, region) -> new RegionWithClicks(region == null ?
"UNKNOWN" : region, clicks),
Joined.with(Serdes.String(), Serdes.Long(), null))
.map((user, regionWithClicks) -> new KeyValue<>(regionWithClicks.getRegion(),
regionWithClicks.getClicks()))
.groupByKey(Serialized.with(Serdes.String(), Serdes.Long()))
.reduce(Long::sum)
.toStream());
}
}
@EnableAutoConfiguration
public static class BiConsumerApplication {
static CountDownLatch latch = new CountDownLatch(2);
@Bean
public BiConsumer<KStream<String, Long>, KTable<String, String>> process() {
return (userClicksStream, userRegionsTable) -> {
userClicksStream.foreach((key, value) -> latch.countDown());
userRegionsTable.toStream().foreach((key, value) -> latch.countDown());
};
}
}
}

View File

@@ -106,12 +106,10 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq",
"spring.cloud.stream.bindings.input.group=group",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTests
@@ -149,13 +147,12 @@ public abstract class DeserializationErrorHandlerByKafkaTests {
// @checkstyle:off
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.bindings.input.destination=word1,word2",
"spring.cloud.stream.kafka.streams.default.consumer.applicationId=deser-kafka-dlq-multi-input",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=deser-kafka-dlq-multi-input",
//"spring.cloud.stream.kafka.streams.default.consumer.applicationId=deser-kafka-dlq-multi-input",
"spring.cloud.stream.bindings.input.group=groupx",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
"spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
// @checkstyle:on
public static class DeserializationByKafkaAndDlqTestsWithMultipleInputs

View File

@@ -85,7 +85,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
System.setProperty("server.port", "0");
System.setProperty("spring.jmx.enabled", "false");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foob", "false",
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("kafka-streams-dlq-tests", "false",
embeddedKafka);
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
@@ -101,17 +101,16 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
consumer.close();
}
@SpringBootTest(properties = { "spring.cloud.stream.bindings.input.destination=foos",
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=foos",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.bindings.output.producer.headerMode=raw",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTests",
@@ -121,13 +120,13 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("hello");
template.sendDefault(7, "hello");
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("foobar",
"false", embeddedKafka);
@@ -150,6 +149,8 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.bindings.input.destination=foos1,foos2",
"spring.cloud.stream.bindings.output.destination=counts-id",
"spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
@@ -157,8 +158,6 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"spring.cloud.stream.kafka.streams.binder.serdeError=sendToDlq",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id"
+ "=deserializationByBinderAndDlqTestsWithMultipleInputs",
@@ -168,7 +167,7 @@ public abstract class DeserializtionErrorHandlerByBinderTests {
@Test
@SuppressWarnings("unchecked")
public void test() throws Exception {
public void test() {
Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(
senderProps);

View File

@@ -207,8 +207,14 @@ public class KafkaStreamsBinderHealthIndicatorTests {
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
@@ -231,10 +237,24 @@ public class KafkaStreamsBinderHealthIndicatorTests {
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.output2.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.keySerde="
// + "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
//
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.output2.producer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsBinderHealthIndicatorTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "ApplicationHealthTest-xyz",
"--spring.cloud.stream.kafka.streams.bindings.input2.consumer.applicationId="
@@ -302,7 +322,7 @@ public class KafkaStreamsBinderHealthIndicatorTests {
}
static class Product {
public static class Product {
Integer id;

View File

@@ -108,7 +108,6 @@ public class KafkaStreamsBinderMultipleInputTopicsTest {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"

View File

@@ -18,10 +18,10 @@ package org.springframework.cloud.stream.binder.kafka.streams.integration;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
@@ -63,7 +63,7 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
private static Consumer<Integer, String> consumer;
private static Consumer<Integer, Long> consumer;
@BeforeClass
public static void setUp() throws Exception {
@@ -72,7 +72,8 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
// consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
// Deserializer.class.getName());
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
DefaultKafkaConsumerFactory<Integer, String> cf = new DefaultKafkaConsumerFactory<>(
consumerProps.put("value.deserializer", LongDeserializer.class);
DefaultKafkaConsumerFactory<Integer, Long> cf = new DefaultKafkaConsumerFactory<>(
consumerProps);
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "counts-id");
@@ -96,8 +97,6 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde="
+ "org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde="
+ "org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId="
+ "KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
@@ -120,13 +119,11 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
template.setDefaultTopic("foos");
template.sendDefault("{\"id\":\"123\"}");
ConsumerRecord<Integer, String> cr = KafkaTestUtils.getSingleRecord(consumer,
ConsumerRecord<Integer, Long> cr = KafkaTestUtils.getSingleRecord(consumer,
"counts-id");
assertThat(cr.key()).isEqualTo(123);
ObjectMapper om = new ObjectMapper();
Long aLong = om.readValue(cr.value(), Long.class);
assertThat(aLong).isEqualTo(1L);
assertThat(cr.value()).isEqualTo(1L);
}
@EnableBinding(KafkaStreamsProcessor.class)
@@ -142,12 +139,14 @@ public class KafkaStreamsBinderPojoInputAndPrimitiveTypeOutputTests {
new JsonSerde<>(Product.class)))
.windowedBy(TimeWindows.of(5000))
.count(Materialized.as("id-count-store-x")).toStream()
.map((key, value) -> new KeyValue<>(key.key().id, value));
.map((key, value) -> {
return new KeyValue<>(key.key().id, value);
});
}
}
static class Product {
public static class Product {
Integer id;

View File

@@ -38,6 +38,7 @@ import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
@@ -98,6 +99,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
}
@Test
@Ignore
public void testKstreamWordCountWithApplicationIdSpecifiedAtDefaultConsumer()
throws Exception {
SpringApplication app = new SpringApplication(
@@ -108,21 +110,16 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.default.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.bindings.output.producer.headerMode=raw",
"--spring.cloud.stream.bindings.input.consumer.headerMode=raw",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
"--spring.cloud.stream.kafka.binder.brokers="
+ embeddedKafka.getBrokersAsString())) {
receiveAndValidate(context);
}
}
@@ -138,13 +135,13 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
"--spring.cloud.stream.kafka.streams.timeWindow.length=5000",
"--spring.cloud.stream.kafka.streams.timeWindow.advanceBy=0",
"--spring.cloud.stream.bindings.input.consumer.concurrency=2",
@@ -155,7 +152,7 @@ public class KafkaStreamsBinderWordCountIntegrationTests {
receiveAndValidate(context);
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-WordCountProcessorApplication-process", StreamsBuilderFactoryBean.class);
KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
ReadOnlyWindowStore<Object, Object> store = kafkaStreams
.store("foo-WordCounts", QueryableStoreTypes.windowStore());

View File

@@ -105,8 +105,7 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
}
@SpringBootTest(properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-abc" }, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public static class NativeEncodingDecodingEnabledTests
@@ -132,8 +131,11 @@ public abstract class KafkaStreamsNativeEncodingDecodingTests {
}
// @checkstyle:off
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = "spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-xyz")
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=NativeEncodingDecodingEnabledTests-xyz" })
// @checkstyle:on
public static class NativeEncodingDecodingDisabledTests
extends KafkaStreamsNativeEncodingDecodingTests {

View File

@@ -97,6 +97,12 @@ public class KafkaStreamsStateStoreIntegrationTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input1.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input1.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsStateStoreIntegrationTests.Product",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde",
// "--spring.cloud.stream.kafka.streams.bindings.input2.consumer.configuration.spring.json.value.default.type=" +
// "org.springframework.cloud.stream.binder.kafka.streams.integration.KafkaStreamsStateStoreIntegrationTests.Product",
"--spring.cloud.stream.kafka.streams.bindings.input1.consumer.applicationId"
+ "=KafkaStreamsStateStoreIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.binder.brokers="
@@ -148,7 +154,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener("input")
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(KStream<Object, Product> input) {
@@ -183,7 +189,7 @@ public class KafkaStreamsStateStoreIntegrationTests {
boolean processed;
@StreamListener
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
@KafkaStreamsStateStore(name = "mystate", type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000, retentionMs = 300000)
@SuppressWarnings({ "deprecation", "unchecked" })
public void process(@Input("input1")KStream<Object, Product> input, @Input("input2")KStream<Object, Product> input2) {

View File

@@ -97,8 +97,6 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$IntegerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId=ProductCountApplication-xyz",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
@@ -108,7 +106,7 @@ public class KafkastreamsBinderPojoInputStringOutputIntegrationTests {
receiveAndValidateFoo(context);
// Assertions on StreamBuilderFactoryBean
StreamsBuilderFactoryBean streamsBuilderFactoryBean = context
.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
.getBean("&stream-builder-ProductCountApplication-process", StreamsBuilderFactoryBean.class);
CleanupConfig cleanup = TestUtils.getPropertyValue(streamsBuilderFactoryBean,
"cleanupConfig", CleanupConfig.class);
assertThat(cleanup.cleanupOnStart()).isFalse();

View File

@@ -0,0 +1,105 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.integration;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.ClassRule;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.stereotype.Component;
import static org.assertj.core.api.Assertions.assertThat;
public class MultiProcessorsWithSameNameTests {
@ClassRule
public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
"counts");
private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule
.getEmbeddedKafka();
@Test
public void testBinderStartsSuccessfullyWhenTwoProcessorsWithSameNamesArePresent() {
SpringApplication app = new SpringApplication(
MultiProcessorsWithSameNameTests.WordCountProcessorApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.input-2.destination=words",
"--spring.cloud.stream.bindings.output.destination=counts",
"--spring.cloud.stream.bindings.output.contentType=application/json",
"--spring.cloud.stream.kafka.streams.bindings.input-1.consumer.application-id=basic-word-count",
"--spring.cloud.stream.kafka.streams.bindings.input-2.consumer.application-id=basic-word-count-1",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
StreamsBuilderFactoryBean streamsBuilderFactoryBean1 = context
.getBean("&stream-builder-Foo-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean1).isNotNull();
StreamsBuilderFactoryBean streamsBuilderFactoryBean2 = context
.getBean("&stream-builder-Bar-process", StreamsBuilderFactoryBean.class);
assertThat(streamsBuilderFactoryBean2).isNotNull();
}
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
static class WordCountProcessorApplication {
@Component
static class Foo {
@StreamListener
public void process(@Input("input-1") KStream<Object, String> input) {
}
}
//Second class with a stub processor that has the same name as above ("process")
@Component
static class Bar {
@StreamListener
public void process(@Input("input-2") KStream<Object, String> input) {
}
}
}
interface KafkaStreamsProcessorX {
@Input("input-1")
KStream<?, ?> input1();
@Input("input-2")
KStream<?, ?> input2();
}
}

View File

@@ -32,6 +32,7 @@ import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.springframework.boot.SpringApplication;
@@ -96,12 +97,15 @@ public class PerRecordAvroContentTypeTests {
}
@Test
@Ignore
public void testPerRecordAvroConentTypeAndVerifySerialization() throws Exception {
SpringApplication app = new SpringApplication(SensorCountAvroApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=false",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=false",
"--spring.cloud.stream.bindings.input.destination=sensors",
"--spring.cloud.stream.bindings.output.destination=received-sensors",
"--spring.cloud.stream.bindings.output.contentType=application/avro",

View File

@@ -41,14 +41,20 @@ import org.springframework.boot.context.properties.EnableConfigurationProperties
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.springframework.kafka.support.serializer.JsonSerde;
import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
@@ -75,36 +81,12 @@ public class StreamToGlobalKTableJoinIntegrationTests {
SpringApplication app = new SpringApplication(
StreamToGlobalKTableJoinIntegrationTests.OrderEnricherApplication.class);
app.setWebApplicationType(WebApplicationType.NONE);
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=orders",
"--spring.cloud.stream.bindings.input-x.destination=customers",
"--spring.cloud.stream.bindings.input-y.destination=products",
"--spring.cloud.stream.bindings.output.destination=enriched-order",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-y.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration"
+ ".StreamToGlobalKTableJoinIntegrationTests$OrderSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests$CustomerSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams."
+ "integration.StreamToGlobalKTableJoinIntegrationTests$ProductSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.springframework.cloud.stream.binder.kafka.streams.integration"
+ ".StreamToGlobalKTableJoinIntegrationTests$EnrichedOrderSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
@@ -112,17 +94,50 @@ public class StreamToGlobalKTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToGlobalKTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.bindings.input-y.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
+ embeddedKafka.getZookeeperConnectionString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KStream, ? extends ConsumerProperties, ? extends ProducerProperties> kStreamBinder = binderFactory
.getBinder("kstream", KStream.class);
KafkaStreamsConsumerProperties input = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) kStreamBinder)
.getExtendedConsumerProperties("input");
String cleanupPolicy = input.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicy).isEqualTo("compact");
Binder<GlobalKTable, ? extends ConsumerProperties, ? extends ProducerProperties> globalKTableBinder = binderFactory
.getBinder("globalktable", GlobalKTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
KafkaStreamsConsumerProperties inputY = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) globalKTableBinder)
.getExtendedConsumerProperties("input-y");
String cleanupPolicyY = inputY.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyY).isEqualTo("compact");
Map<String, Object> senderPropsCustomer = KafkaTestUtils
.producerProps(embeddedKafka);
senderPropsCustomer.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
CustomerSerde customerSerde = new CustomerSerde();
senderPropsCustomer.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
customerSerde.serializer().getClass());
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Customer> pfCustomer = new DefaultKafkaProducerFactory<>(
senderPropsCustomer);
@@ -139,9 +154,8 @@ public class StreamToGlobalKTableJoinIntegrationTests {
.producerProps(embeddedKafka);
senderPropsProduct.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
ProductSerde productSerde = new ProductSerde();
senderPropsProduct.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
productSerde.serializer().getClass());
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Product> pfProduct = new DefaultKafkaProducerFactory<>(
senderPropsProduct);
@@ -159,9 +173,8 @@ public class StreamToGlobalKTableJoinIntegrationTests {
.producerProps(embeddedKafka);
senderPropsOrder.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
LongSerializer.class);
OrderSerde orderSerde = new OrderSerde();
senderPropsOrder.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
orderSerde.serializer().getClass());
JsonSerializer.class);
DefaultKafkaProducerFactory<Long, Order> pfOrder = new DefaultKafkaProducerFactory<>(
senderPropsOrder);
@@ -180,9 +193,8 @@ public class StreamToGlobalKTableJoinIntegrationTests {
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
LongDeserializer.class);
EnrichedOrderSerde enrichedOrderSerde = new EnrichedOrderSerde();
consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
enrichedOrderSerde.deserializer().getClass());
JsonDeserializer.class);
consumerProps.put(JsonDeserializer.VALUE_DEFAULT_TYPE,
"org.springframework.cloud.stream.binder.kafka.streams.integration."
+ "StreamToGlobalKTableJoinIntegrationTests.EnrichedOrder");
@@ -227,7 +239,9 @@ public class StreamToGlobalKTableJoinIntegrationTests {
pfOrder.destroy();
consumer.close();
}
finally {
context.close();
}
}
interface CustomGlobalKTableProcessor extends KafkaStreamsProcessor {
@@ -371,21 +385,4 @@ public class StreamToGlobalKTableJoinIntegrationTests {
}
}
public static class OrderSerde extends JsonSerde<Order> {
}
public static class CustomerSerde extends JsonSerde<Customer> {
}
public static class ProductSerde extends JsonSerde<Product> {
}
public static class EnrichedOrderSerde extends JsonSerde<EnrichedOrder> {
}
}

View File

@@ -20,6 +20,7 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
@@ -32,6 +33,7 @@ import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
@@ -46,9 +48,17 @@ import org.springframework.boot.context.properties.EnableConfigurationProperties
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ExtendedPropertiesBinder;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsProcessor;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsApplicationSupportProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
@@ -90,26 +100,11 @@ public class StreamToTableJoinIntegrationTests {
consumer = cf.createConsumer();
embeddedKafka.consumeFromAnEmbeddedTopic(consumer, "output-topic-1");
try (ConfigurableApplicationContext ignored = app.run("--server.port=0",
ConfigurableApplicationContext context = app.run("--server.port=0",
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=user-clicks-1",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-1",
"--spring.cloud.stream.bindings.output.destination=output-topic-1",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
@@ -117,10 +112,25 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=10000",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.applicationId"
+ "=StreamToTableJoinIntegrationTests-abc",
"--spring.cloud.stream.kafka.streams.bindings.input-x.consumer.topic.properties.cleanup.policy=compact",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.cloud.stream.kafka.streams.binder.zkNodes="
+ embeddedKafka.getZookeeperConnectionString())) {
+ embeddedKafka.getZookeeperConnectionString());
try {
// Testing certain ancillary configuration of GlobalKTable around topics creation.
// See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/687
BinderFactory binderFactory = context.getBeanFactory()
.getBean(BinderFactory.class);
Binder<KTable, ? extends ConsumerProperties, ? extends ProducerProperties> ktableBinder = binderFactory
.getBinder("ktable", KTable.class);
KafkaStreamsConsumerProperties inputX = (KafkaStreamsConsumerProperties) ((ExtendedPropertiesBinder) ktableBinder)
.getExtendedConsumerProperties("input-x");
String cleanupPolicyX = inputX.getTopic().getProperties().get("cleanup.policy");
assertThat(cleanupPolicyX).isEqualTo("compact");
// Input 1: Region per user (multiple records allowed per user).
List<KeyValue<String, String>> userRegions = Arrays.asList(new KeyValue<>(
@@ -246,23 +256,8 @@ public class StreamToTableJoinIntegrationTests {
"--spring.cloud.stream.bindings.input.destination=user-clicks-2",
"--spring.cloud.stream.bindings.input-x.destination=user-regions-2",
"--spring.cloud.stream.bindings.output.destination=output-topic-2",
"--spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.input-x.consumer.useNativeDecoding=true",
"--spring.cloud.stream.bindings.output.producer.useNativeEncoding=true",
"--spring.cloud.stream.kafka.streams.binder.configuration.auto.offset.reset=latest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.startOffset=earliest",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.inputX.consumer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde"
+ "=org.apache.kafka.common.serialization.Serdes$LongSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde"
@@ -357,6 +352,20 @@ public class StreamToTableJoinIntegrationTests {
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/536
}
@Test
public void testTwoKStreamsCanBeJoined() {
SpringApplication app = new SpringApplication(
JoinProcessor.class);
app.setWebApplicationType(WebApplicationType.NONE);
app.run("--server.port=0",
"--spring.cloud.stream.kafka.streams.binder.brokers="
+ embeddedKafka.getBrokersAsString(),
"--spring.application.name=" +
"two-kstream-input-join-integ-test");
//All we are verifying is that this application didn't throw any errors.
//See this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/701
}
@EnableBinding(KafkaStreamsProcessorX.class)
@EnableAutoConfiguration
@EnableConfigurationProperties(KafkaStreamsApplicationSupportProperties.class)
@@ -380,6 +389,12 @@ public class StreamToTableJoinIntegrationTests {
.toStream();
}
//This forces the state stores to be cleaned up before running the test.
@Bean
public CleanupConfig cleanupConfig() {
return new CleanupConfig(true, false);
}
}
@EnableBinding(KafkaStreamsProcessorY.class)
@@ -436,4 +451,37 @@ public class StreamToTableJoinIntegrationTests {
}
interface BindingsForTwoKStreamJoinTest {
String INPUT_1 = "input_1";
String INPUT_2 = "input_2";
@Input(INPUT_1)
KStream<String, String> input_1();
@Input(INPUT_2)
KStream<String, String> input_2();
}
@EnableBinding(BindingsForTwoKStreamJoinTest.class)
@EnableAutoConfiguration
public static class JoinProcessor {
@StreamListener
public void testProcessor(
@Input(BindingsForTwoKStreamJoinTest.INPUT_1) KStream<String, String> input1Stream,
@Input(BindingsForTwoKStreamJoinTest.INPUT_2) KStream<String, String> input2Stream) {
input1Stream
.join(input2Stream,
(event1, event2) -> null,
JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
Joined.with(
Serdes.String(),
Serdes.String(),
Serdes.String()
)
);
}
}
}

View File

@@ -96,11 +96,8 @@ public class WordCountMultipleBranchesIntegrationTests {
"--spring.jmx.enabled=false",
"--spring.cloud.stream.bindings.input.destination=words",
"--spring.cloud.stream.bindings.output1.destination=counts",
"--spring.cloud.stream.bindings.output1.contentType=application/json",
"--spring.cloud.stream.bindings.output2.destination=foo",
"--spring.cloud.stream.bindings.output2.contentType=application/json",
"--spring.cloud.stream.bindings.output3.destination=bar",
"--spring.cloud.stream.bindings.output3.contentType=application/json",
"--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
"--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde"
+ "=org.apache.kafka.common.serialization.Serdes$StringSerde",

View File

@@ -0,0 +1,89 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.streams.serde;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import org.junit.Test;
import static org.assertj.core.api.Assertions.assertThat;
/**
*
* @author Soby Chacko
*/
public class CollectionSerdeTest {
@Test
public void testCollectionsSerde() {
Foo foo1 = new Foo();
foo1.setData("data-1");
foo1.setNum(1);
Foo foo2 = new Foo();
foo2.setData("data-2");
foo2.setNum(2);
List<Foo> foos = new ArrayList<>();
foos.add(foo1);
foos.add(foo2);
CollectionSerde<Foo> collectionSerde = new CollectionSerde<>(Foo.class, ArrayList.class);
byte[] serialized = collectionSerde.serializer().serialize("", foos);
Collection<Foo> deserialized = collectionSerde.deserializer().deserialize("", serialized);
Iterator<Foo> iterator = deserialized.iterator();
Foo foo1Retrieved = iterator.next();
assertThat(foo1Retrieved.getData()).isEqualTo("data-1");
assertThat(foo1Retrieved.getNum()).isEqualTo(1);
Foo foo2Retrieved = iterator.next();
assertThat(foo2Retrieved.getData()).isEqualTo("data-2");
assertThat(foo2Retrieved.getNum()).isEqualTo(2);
}
static class Foo {
private int num;
private String data;
Foo() {
}
public int getNum() {
return num;
}
public void setNum(int num) {
this.num = num;
}
public String getData() {
return data;
}
public void setData(String data) {
this.data = data;
}
}
}

View File

@@ -55,7 +55,7 @@ public class CompositeNonNativeSerdeTest {
CompositeMessageConverterFactory compositeMessageConverterFactory = new CompositeMessageConverterFactory(
messageConverters, new ObjectMapper());
CompositeNonNativeSerde compositeNonNativeSerde = new CompositeNonNativeSerde(
compositeMessageConverterFactory);
compositeMessageConverterFactory.getMessageConverterForAllRegistered());
Map<String, Object> configs = new HashMap<>();
configs.put("valueClass", Sensor.class);

View File

@@ -4,6 +4,7 @@
<pattern>%d{ISO8601} %5p %t %c{2}:%L - %m%n</pattern>
</encoder>
</appender>
<!-- <logger name="org.apache.kafka" level="DEBUG"/>-->
<logger name="org.springframework.integration.kafka" level="INFO"/>
<logger name="org.springframework.kafka" level="INFO"/>
<logger name="org.springframework.cloud.stream" level="INFO" />

View File

@@ -10,7 +10,7 @@
<parent>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
<version>2.2.0.M1</version>
<version>3.0.0.M3</version>
</parent>
<dependencies>
@@ -69,13 +69,13 @@
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<artifactId>kafka_2.12</artifactId>
<version>${kafka.version}</version>
<classifier>test</classifier>
<scope>test</scope>

View File

@@ -1,387 +0,0 @@
/*
* Copyright 2017-2018 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.deser.std.StdNodeBasedDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.type.TypeFactory;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.springframework.kafka.support.AbstractKafkaHeaderMapper;
import org.springframework.lang.Nullable;
import org.springframework.messaging.MessageHeaders;
import org.springframework.util.Assert;
import org.springframework.util.ClassUtils;
import org.springframework.util.MimeType;
/**
* Default header mapper for Apache Kafka. Most headers in {@link KafkaHeaders} are not
* mapped on outbound messages. The exceptions are correlation and reply headers for
* request/reply messaging. Header types are added to a special header
* {@link #JSON_TYPES}.
*
* @author Gary Russell
* @author Artem Bilan
* @since 2.0
* @deprecated will be removed in the next point release after 2.1.0. See issue
* https://github.com/spring-cloud/spring-cloud-stream-binder-kafka/issues/509
*/
public class BinderHeaderMapper extends AbstractKafkaHeaderMapper {
private static final List<String> DEFAULT_TRUSTED_PACKAGES = Arrays
.asList("java.util", "java.lang", "org.springframework.util");
private static final List<String> DEFAULT_TO_STRING_CLASSES = Arrays.asList(
"org.springframework.util.MimeType", "org.springframework.http.MediaType");
/**
* Header name for java types of other headers.
*/
public static final String JSON_TYPES = "spring_json_header_types";
private final ObjectMapper objectMapper;
private final Set<String> trustedPackages = new LinkedHashSet<>(
DEFAULT_TRUSTED_PACKAGES);
private final Set<String> toStringClasses = new LinkedHashSet<>(
DEFAULT_TO_STRING_CLASSES);
/**
* Construct an instance with the default object mapper and default header patterns
* for outbound headers; all inbound headers are mapped. The default pattern list is
* {@code "!id", "!timestamp" and "*"}. In addition, most of the headers in
* {@link KafkaHeaders} are never mapped as headers since they represent data in
* consumer/producer records.
* @see #BinderHeaderMapper(ObjectMapper)
*/
public BinderHeaderMapper() {
this(new ObjectMapper());
}
/**
* Construct an instance with the provided object mapper and default header patterns
* for outbound headers; all inbound headers are mapped. The patterns are applied in
* order, stopping on the first match (positive or negative). Patterns are negated by
* preceding them with "!". The default pattern list is
* {@code "!id", "!timestamp" and "*"}. In addition, most of the headers in
* {@link KafkaHeaders} are never mapped as headers since they represent data in
* consumer/producer records.
* @param objectMapper the object mapper.
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
*/
public BinderHeaderMapper(ObjectMapper objectMapper) {
this(objectMapper, "!" + MessageHeaders.ID, "!" + MessageHeaders.TIMESTAMP, "*");
}
/**
* Construct an instance with a default object mapper and the provided header patterns
* for outbound headers; all inbound headers are mapped. The patterns are applied in
* order, stopping on the first match (positive or negative). Patterns are negated by
* preceding them with "!". The patterns will replace the default patterns; you
* generally should not map the {@code "id" and "timestamp"} headers. Note: most of
the headers in {@link KafkaHeaders} are never mapped as headers since they represent
* data in consumer/producer records.
* @param patterns the patterns.
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
*/
public BinderHeaderMapper(String... patterns) {
this(new ObjectMapper(), patterns);
}
/**
* Construct an instance with the provided object mapper and the provided header
* patterns for outbound headers; all inbound headers are mapped. The patterns are
* applied in order, stopping on the first match (positive or negative). Patterns are
* negated by preceding them with "!". The patterns will replace the default patterns;
* you generally should not map the {@code "id" and "timestamp"} headers. Note: most
* of the headers in {@link KafkaHeaders} are never mapped as headers since they
* represent data in consumer/producer records.
* @param objectMapper the object mapper.
* @param patterns the patterns.
* @see org.springframework.util.PatternMatchUtils#simpleMatch(String, String)
*/
public BinderHeaderMapper(ObjectMapper objectMapper, String... patterns) {
super(patterns);
Assert.notNull(objectMapper, "'objectMapper' must not be null");
Assert.noNullElements(patterns, "'patterns' must not have null elements");
this.objectMapper = objectMapper;
this.objectMapper.registerModule(new SimpleModule()
.addDeserializer(MimeType.class, new MimeTypeJsonDeserializer()));
}
/**
* Return the object mapper.
* @return the mapper.
*/
protected ObjectMapper getObjectMapper() {
return this.objectMapper;
}
/**
* Provide direct access to the trusted packages set for subclasses.
* @return the trusted packages.
* @since 2.2
*/
protected Set<String> getTrustedPackages() {
return this.trustedPackages;
}
/**
* Provide direct access to the toString() classes by subclasses.
* @return the toString() classes.
* @since 2.2
*/
protected Set<String> getToStringClasses() {
return this.toStringClasses;
}
/**
* Add packages to the trusted packages list (default {@code java.util, java.lang})
* used when constructing objects from JSON. If any of the supplied packages is
* {@code "*"}, all packages are trusted. If a class for a non-trusted package is
* encountered, the header is returned to the application with value of type
* {@link NonTrustedHeaderType}.
* @param trustedPackages the packages to trust.
*/
public void addTrustedPackages(String... trustedPackages) {
if (trustedPackages != null) {
for (String whiteList : trustedPackages) {
if ("*".equals(whiteList)) {
this.trustedPackages.clear();
break;
}
else {
this.trustedPackages.add(whiteList);
}
}
}
}
/**
* Add class names that the outbound mapper should perform toString() operations on
* before mapping.
* @param classNames the class names.
* @since 2.2
*/
public void addToStringClasses(String... classNames) {
this.toStringClasses.addAll(Arrays.asList(classNames));
}
@Override
public void fromHeaders(MessageHeaders headers, Headers target) {
final Map<String, String> jsonHeaders = new HashMap<>();
headers.forEach((k, v) -> {
if (matches(k, v)) {
if (v instanceof byte[]) {
target.add(new RecordHeader(k, (byte[]) v));
}
else {
try {
Object value = v;
String className = v.getClass().getName();
if (this.toStringClasses.contains(className)) {
value = v.toString();
className = "java.lang.String";
}
target.add(new RecordHeader(k,
getObjectMapper().writeValueAsBytes(value)));
jsonHeaders.put(k, className);
}
catch (Exception e) {
if (logger.isDebugEnabled()) {
logger.debug("Could not map " + k + " with type "
+ v.getClass().getName());
}
}
}
}
});
if (jsonHeaders.size() > 0) {
try {
target.add(new RecordHeader(JSON_TYPES,
getObjectMapper().writeValueAsBytes(jsonHeaders)));
}
catch (IllegalStateException | JsonProcessingException e) {
logger.error("Could not add json types header", e);
}
}
}
@Override
public void toHeaders(Headers source, final Map<String, Object> headers) {
final Map<String, String> jsonTypes = decodeJsonTypes(source);
source.forEach(h -> {
if (!(h.key().equals(JSON_TYPES))) {
if (jsonTypes != null && jsonTypes.containsKey(h.key())) {
Class<?> type = Object.class;
String requestedType = jsonTypes.get(h.key());
boolean trusted = false;
try {
trusted = trusted(requestedType);
if (trusted) {
type = ClassUtils.forName(requestedType, null);
}
}
catch (Exception e) {
logger.error("Could not load class for header: " + h.key(), e);
}
if (trusted) {
try {
headers.put(h.key(),
getObjectMapper().readValue(h.value(), type));
}
catch (IOException e) {
logger.error("Could not decode json type: "
+ new String(h.value()) + " for key: " + h.key(), e);
headers.put(h.key(), h.value());
}
}
else {
headers.put(h.key(),
new NonTrustedHeaderType(h.value(), requestedType));
}
}
else {
headers.put(h.key(), h.value());
}
}
});
}
@SuppressWarnings("unchecked")
@Nullable
private Map<String, String> decodeJsonTypes(Headers source) {
Map<String, String> types = null;
Iterator<Header> iterator = source.iterator();
while (iterator.hasNext()) {
Header next = iterator.next();
if (next.key().equals(JSON_TYPES)) {
try {
types = getObjectMapper().readValue(next.value(), Map.class);
}
catch (IOException e) {
logger.error(
"Could not decode json types: " + new String(next.value()),
e);
}
break;
}
}
return types;
}
protected boolean trusted(String requestedType) {
if (!this.trustedPackages.isEmpty()) {
int lastDot = requestedType.lastIndexOf('.');
if (lastDot < 0) {
return false;
}
String packageName = requestedType.substring(0, lastDot);
for (String trustedPackage : this.trustedPackages) {
if (packageName.equals(trustedPackage)
|| packageName.startsWith(trustedPackage + ".")) {
return true;
}
}
return false;
}
return true;
}
/**
* The {@link StdNodeBasedDeserializer} extension for {@link MimeType}
* deserialization. It is presented here for backward compatibility when older
* producers send {@link MimeType} headers as serialization version.
*/
private class MimeTypeJsonDeserializer extends StdNodeBasedDeserializer<MimeType> {
private static final long serialVersionUID = 1L;
MimeTypeJsonDeserializer() {
super(MimeType.class);
}
@Override
public MimeType convert(JsonNode root, DeserializationContext ctxt)
throws IOException {
JsonNode type = root.get("type");
JsonNode subType = root.get("subtype");
JsonNode parameters = root.get("parameters");
Map<String, String> params = BinderHeaderMapper.this.objectMapper
.readValue(parameters.traverse(), TypeFactory.defaultInstance()
.constructMapType(HashMap.class, String.class, String.class));
return new MimeType(type.asText(), subType.asText(), params);
}
}
/**
* Represents a header that could not be decoded due to an untrusted type.
*/
public static class NonTrustedHeaderType {
private final byte[] headerValue;
private final String untrustedType;
NonTrustedHeaderType(byte[] headerValue, String untrustedType) { // NOSONAR
this.headerValue = headerValue; // NOSONAR
this.untrustedType = untrustedType;
}
public byte[] getHeaderValue() {
return this.headerValue; // NOSONAR
}
public String getUntrustedType() {
return this.untrustedType;
}
@Override
public String toString() {
try {
return "NonTrustedHeaderType [headerValue="
+ new String(this.headerValue, StandardCharsets.UTF_8)
+ ", untrustedType=" + this.untrustedType + "]";
}
catch (Exception e) {
return "NonTrustedHeaderType [headerValue="
+ Arrays.toString(this.headerValue) + ", untrustedType="
+ this.untrustedType + "]";
}
}
}
}
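The constructor and addTrustedPackages javadoc in the removed BinderHeaderMapper above describe its semantics: outbound header patterns are applied in order, a leading "!" negates a pattern, and only header types from trusted packages are deserialized back to objects (anything else is returned as NonTrustedHeaderType). Since the class is deprecated and deleted by this change, the following is only a hedged usage sketch for reference; the header name, value, and trusted package are hypothetical.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.springframework.cloud.stream.binder.kafka.BinderHeaderMapper;
import org.springframework.messaging.MessageHeaders;
public class BinderHeaderMapperSketch {
public static void main(String[] args) {
// Default-style patterns: skip the framework id/timestamp headers, map everything else.
BinderHeaderMapper mapper = new BinderHeaderMapper("!id", "!timestamp", "*");
// Types from this package may be rebuilt from the spring_json_header_types entry;
// untrusted types come back wrapped in BinderHeaderMapper.NonTrustedHeaderType.
mapper.addTrustedPackages("com.example.events");
RecordHeaders kafkaHeaders = new RecordHeaders();
mapper.fromHeaders(
new MessageHeaders(Collections.singletonMap("traceId", "abc-123")),
kafkaHeaders);
Map<String, Object> restored = new HashMap<>();
mapper.toHeaders(kafkaHeaders, restored);
// java.lang is trusted by default, so the String value round-trips unchanged.
System.out.println(restored.get("traceId"));
}
}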

View File

@@ -30,9 +30,11 @@ import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.scheduling.concurrent.CustomizableThreadFactory;
/**
* Health indicator for Kafka.
@@ -43,11 +45,15 @@ import org.springframework.kafka.core.ConsumerFactory;
* @author Gary Russell
* @author Laur Aliste
* @author Soby Chacko
* @author Vladislav Fefelov
*/
public class KafkaBinderHealthIndicator implements HealthIndicator {
public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {
private static final int DEFAULT_TIMEOUT = 60;
private final ExecutorService executor = Executors.newSingleThreadExecutor(
new CustomizableThreadFactory("kafka-binder-health-"));
private final KafkaMessageChannelBinder binder;
private final ConsumerFactory<?, ?> consumerFactory;
@@ -72,8 +78,7 @@ public class KafkaBinderHealthIndicator implements HealthIndicator {
@Override
public Health health() {
ExecutorService exec = Executors.newSingleThreadExecutor();
Future<Health> future = exec.submit(this::buildHealthStatus);
Future<Health> future = executor.submit(this::buildHealthStatus);
try {
return future.get(this.timeout, TimeUnit.SECONDS);
}
@@ -91,9 +96,6 @@ public class KafkaBinderHealthIndicator implements HealthIndicator {
return Health.down().withDetail("Failed to retrieve partition information in",
this.timeout + " seconds").build();
}
finally {
exec.shutdownNow();
}
}
private Health buildHealthStatus() {
@@ -146,4 +148,9 @@ public class KafkaBinderHealthIndicator implements HealthIndicator {
}
}
@Override
public void destroy() throws Exception {
executor.shutdown();
}
}
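
The change above replaces the per-call executor with a single shared executor (threads named "kafka-binder-health-") that is reused for every health check and shut down via DisposableBean.destroy(). A minimal, standalone sketch of the same pattern, assuming a generic health probe rather than the binder's actual partition check:

// Minimal sketch of the pattern adopted above: one shared executor for health checks,
// bounded by a timeout, shut down when the bean is destroyed. Illustrative only.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.scheduling.concurrent.CustomizableThreadFactory;

public class TimeoutHealthIndicator implements HealthIndicator, DisposableBean {

	private final ExecutorService executor = Executors.newSingleThreadExecutor(
			new CustomizableThreadFactory("my-health-"));

	private final Supplier<Health> check; // the actual health probe

	private final int timeoutSeconds;

	public TimeoutHealthIndicator(Supplier<Health> check, int timeoutSeconds) {
		this.check = check;
		this.timeoutSeconds = timeoutSeconds;
	}

	@Override
	public Health health() {
		Future<Health> future = this.executor.submit(this.check::get);
		try {
			return future.get(this.timeoutSeconds, TimeUnit.SECONDS);
		}
		catch (Exception e) { // timeout, interruption or probe failure
			future.cancel(true);
			return Health.down(e).build();
		}
	}

	@Override
	public void destroy() {
		this.executor.shutdown();
	}
}
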

View File

@@ -55,11 +55,16 @@ public interface KafkaBindingRebalanceListener {
/**
* Invoked when partitions are initially assigned or after a rebalance. Applications
* might only want to perform seek operations on an initial assignment.
* might only want to perform seek operations on an initial assignment. Since the
* 'initial' argument is true once per consumer thread (when concurrency is greater
* than 1), implementations should keep track of exactly which partitions have
* already been sought. There is a race in that a rebalance could occur during
* startup, so a topic/partition that has been sought on one thread may be
* re-assigned to another thread, and you may not wish to re-seek it at that time.
* @param bindingName the name of the binding.
* @param consumer the consumer.
* @param partitions the partitions.
* @param initial true if this is the initial assignment.
* @param initial true if this is the initial assignment on the current thread.
*/
default void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
Collection<TopicPartition> partitions, boolean initial) {
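
A minimal sketch of a listener that follows the advice above, tracking which topic/partitions have already been sought so they are not re-sought when a rebalance moves them to another consumer thread; the class name and the seek-to-beginning strategy are illustrative only:

// Illustrative listener: seek each assigned partition to the beginning exactly once,
// even across concurrent consumer threads, as recommended in the javadoc above.
import java.util.Collection;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

import org.springframework.cloud.stream.binder.kafka.KafkaBindingRebalanceListener;

public class SeekToBeginningOnceListener implements KafkaBindingRebalanceListener {

	private final Set<TopicPartition> sought = ConcurrentHashMap.newKeySet();

	@Override
	public void onPartitionsAssigned(String bindingName, Consumer<?, ?> consumer,
			Collection<TopicPartition> partitions, boolean initial) {

		// 'initial' is per-thread, so track the partitions explicitly
		for (TopicPartition tp : partitions) {
			if (this.sought.add(tp)) {
				consumer.seekToBeginning(Collections.singletonList(tp));
			}
		}
	}
}
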

View File

@@ -0,0 +1,68 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka;
import org.springframework.expression.EvaluationContext;
import org.springframework.expression.Expression;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.util.Assert;
/**
* Interceptor to evaluate expressions for outbound messages before serialization.
*
* @author Gary Russell
* @since 3.0
*
*/
public class KafkaExpressionEvaluatingInterceptor implements ChannelInterceptor {
/**
* Name for the evaluated message key header.
*/
public static final String MESSAGE_KEY_HEADER = "scst_messageKey";
private final Expression messageKeyExpression;
private final EvaluationContext evaluationContext;
/**
* Construct an instance with the provided expression and evaluation context. The
* message key expression must be non-null.
* @param messageKeyExpression the message key expression.
* @param evaluationContext the evaluation context.
*/
public KafkaExpressionEvaluatingInterceptor(Expression messageKeyExpression, EvaluationContext evaluationContext) {
Assert.notNull(messageKeyExpression, "A message key expression is required");
Assert.notNull(evaluationContext, "the 'evaluationContext' cannot be null");
this.messageKeyExpression = messageKeyExpression;
this.evaluationContext = evaluationContext;
}
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
MessageBuilder<?> builder = MessageBuilder.fromMessage(message);
if (this.messageKeyExpression != null) {
builder.setHeader(MESSAGE_KEY_HEADER,
this.messageKeyExpression.getValue(this.evaluationContext, message));
}
return builder.build();
}
}
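
The binder installs this interceptor automatically when the messageKeyExpression references the payload and native encoding is not used (see postProcessOutputChannel below). Purely as an illustration of what it does, a minimal sketch, assuming a SpEL expression against a String payload and the interceptor's own package:

// Illustrative only: the evaluated key is placed in the scst_messageKey header,
// which the producer message handler later reads via headers['scst_messageKey'].
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.GenericMessage;

public class InterceptorSketch {

	public static void main(String[] args) {
		Expression keyExpression = new SpelExpressionParser().parseExpression("payload.length()");
		KafkaExpressionEvaluatingInterceptor interceptor =
				new KafkaExpressionEvaluatingInterceptor(keyExpression, new StandardEvaluationContext());

		Message<String> original = new GenericMessage<>("some payload");
		Message<?> intercepted = interceptor.preSend(original, null); // channel is not used

		Object key = intercepted.getHeaders()
				.get(KafkaExpressionEvaluatingInterceptor.MESSAGE_KEY_HEADER);
		System.out.println(key); // 12
	}
}
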

View File

@@ -28,9 +28,9 @@ import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;
import java.util.regex.Pattern;
@@ -75,11 +75,12 @@ import org.springframework.cloud.stream.config.MessageSourceCustomizer;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.cloud.stream.provisioning.ProducerDestination;
import org.springframework.context.Lifecycle;
import org.springframework.expression.Expression;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.integration.StaticMessageHeaderAccessor;
import org.springframework.integration.acks.AcknowledgmentCallback;
import org.springframework.integration.channel.ChannelInterceptorAware;
import org.springframework.integration.channel.AbstractMessageChannel;
import org.springframework.integration.core.MessageProducer;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.integration.kafka.inbound.KafkaMessageSource;
@@ -96,20 +97,23 @@ import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareRebalanceListener;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.DefaultKafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaderMapper;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.ProducerListener;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionInitialOffset;
import org.springframework.kafka.support.TopicPartitionInitialOffset.SeekPosition;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.TopicPartitionOffset.SeekPosition;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.transaction.KafkaTransactionManager;
import org.springframework.lang.Nullable;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.MessageHandler;
import org.springframework.messaging.MessageHeaders;
import org.springframework.messaging.MessagingException;
import org.springframework.messaging.support.ChannelInterceptor;
import org.springframework.messaging.support.ErrorMessage;
import org.springframework.messaging.support.InterceptableChannel;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
@@ -180,6 +184,10 @@ public class KafkaMessageChannelBinder extends
private static final ThreadLocal<String> bindingNameHolder = new ThreadLocal<>();
private static final Pattern interceptorNeededPattern = Pattern.compile("(payload|#root|#this)");
private static final SpelExpressionParser PARSER = new SpelExpressionParser();
private final KafkaBinderConfigurationProperties configurationProperties;
private final Map<String, TopicInformation> topicsInUse = new ConcurrentHashMap<>();
@@ -284,6 +292,17 @@ public class KafkaMessageChannelBinder extends
return this.extendedBindingProperties.getExtendedPropertiesEntryClass();
}
/**
* Return a reference to the binder's transaction manager's producer factory (if
* configured). Use this to create a transaction manager in a bean definition when you
* wish to use producer-only transactions.
* @return the producer factory, or null.
*/
@Nullable
public ProducerFactory<byte[], byte[]> getTransactionalProducerFactory() {
return this.transactionManager == null ? null : this.transactionManager.getProducerFactory();
}
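
A minimal sketch of the bean definition this javadoc describes; the ProducerOnlyTransactionTests added later in this change show a fuller version, including handling the case where no binder is present. Names are illustrative and the binder's transaction-id-prefix is assumed to be configured:

// Illustrative bean definition using the binder's transactional producer factory.
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.transaction.KafkaTransactionManager;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ProducerOnlyTxConfig {

	@Bean
	public KafkaTransactionManager<byte[], byte[]> transactionManager(BinderFactory binders) {
		// getTransactionalProducerFactory() returns null when no
		// transaction-id-prefix is configured on the binder
		ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders
				.getBinder(null, MessageChannel.class)).getTransactionalProducerFactory();
		return new KafkaTransactionManager<>(pf);
	}
}
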
@Override
protected MessageHandler createProducerMessageHandler(
final ProducerDestination destination,
@@ -314,7 +333,9 @@ public class KafkaMessageChannelBinder extends
List<PartitionInfo> partitionsFor = producer
.partitionsFor(destination.getName());
producer.close();
((DisposableBean) producerFB).destroy();
if (this.transactionManager == null) {
((DisposableBean) producerFB).destroy();
}
return partitionsFor;
}, destination.getName());
this.topicsInUse.put(destination.getName(),
@@ -329,8 +350,8 @@ public class KafkaMessageChannelBinder extends
+ partitions.size()
+ " for the topic. The larger number will be used instead.");
}
List<ChannelInterceptor> interceptors = ((ChannelInterceptorAware) channel)
.getChannelInterceptors();
List<ChannelInterceptor> interceptors = ((InterceptableChannel) channel)
.getInterceptors();
interceptors.forEach((interceptor) -> {
if (interceptor instanceof PartitioningInterceptor) {
((PartitioningInterceptor) interceptor)
@@ -343,11 +364,17 @@ public class KafkaMessageChannelBinder extends
if (this.producerListener != null) {
kafkaTemplate.setProducerListener(this.producerListener);
}
if (this.transactionManager != null) {
kafkaTemplate.setTransactionIdPrefix(configurationProperties.getTransaction().getTransactionIdPrefix());
}
ProducerConfigurationMessageHandler handler = new ProducerConfigurationMessageHandler(
kafkaTemplate, destination.getName(), producerProperties, producerFB);
if (errorChannel != null) {
handler.setSendFailureChannel(errorChannel);
}
if (StringUtils.hasText(producerProperties.getExtension().getRecordMetadataChannel())) {
handler.setSendSuccessChannelName(producerProperties.getExtension().getRecordMetadataChannel());
}
KafkaHeaderMapper mapper = null;
if (this.configurationProperties.getHeaderMapperBeanName() != null) {
mapper = getApplicationContext().getBean(
@@ -374,17 +401,40 @@ public class KafkaMessageChannelBinder extends
if (!patterns.contains("!" + MessageHeaders.ID)) {
patterns.add(0, "!" + MessageHeaders.ID);
}
mapper = new BinderHeaderMapper(
mapper = new DefaultKafkaHeaderMapper(
patterns.toArray(new String[patterns.size()]));
}
else {
mapper = new BinderHeaderMapper();
mapper = new DefaultKafkaHeaderMapper();
}
}
handler.setHeaderMapper(mapper);
return handler;
}
@Override
protected void postProcessOutputChannel(MessageChannel outputChannel,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties) {
if (expressionInterceptorNeeded(producerProperties)) {
((AbstractMessageChannel) outputChannel).addInterceptor(0, new KafkaExpressionEvaluatingInterceptor(
producerProperties.getExtension().getMessageKeyExpression(), getEvaluationContext()));
}
}
private boolean expressionInterceptorNeeded(
ExtendedProducerProperties<KafkaProducerProperties> producerProperties) {
if (producerProperties.isUseNativeEncoding()) {
return false; // payload will be intact when it reaches the adapter
}
else {
Expression messageKeyExpression = producerProperties.getExtension().getMessageKeyExpression();
return messageKeyExpression != null
&& interceptorNeededPattern.matcher(messageKeyExpression.getExpressionString()).find();
}
}
protected DefaultKafkaProducerFactory<byte[], byte[]> getProducerFactory(
String transactionIdPrefix,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties) {
@@ -487,14 +537,15 @@ public class KafkaMessageChannelBinder extends
Assert.isTrue(!CollectionUtils.isEmpty(listenedPartitions),
"A list of partitions must be provided");
}
final TopicPartitionInitialOffset[] topicPartitionInitialOffsets = getTopicPartitionInitialOffsets(
listenedPartitions);
final TopicPartitionOffset[] topicPartitionOffsets = groupManagement
? null
: getTopicPartitionOffsets(listenedPartitions, extendedConsumerProperties, consumerFactory);
final ContainerProperties containerProperties = anonymous
|| extendedConsumerProperties.getExtension().isAutoRebalanceEnabled()
|| groupManagement
? usingPatterns
? new ContainerProperties(Pattern.compile(topics[0]))
: new ContainerProperties(topics)
: new ContainerProperties(topicPartitionInitialOffsets);
: new ContainerProperties(topicPartitionOffsets);
if (this.transactionManager != null) {
containerProperties.setTransactionManager(this.transactionManager);
}
@@ -511,8 +562,7 @@ public class KafkaMessageChannelBinder extends
if (groupManagement && listenedPartitions.isEmpty()) {
concurrency = extendedConsumerProperties.getConcurrency();
}
resetOffsets(extendedConsumerProperties, consumerFactory, groupManagement,
containerProperties);
resetOffsetsForAutoRebalance(extendedConsumerProperties, consumerFactory, containerProperties);
@SuppressWarnings("rawtypes")
final ConcurrentMessageListenerContainer<?, ?> messageListenerContainer = new ConcurrentMessageListenerContainer(
consumerFactory, containerProperties) {
@@ -533,7 +583,7 @@ public class KafkaMessageChannelBinder extends
messageListenerContainer
.setApplicationEventPublisher(getApplicationContext());
}
messageListenerContainer.setBeanName(topics + ".container");
messageListenerContainer.setBeanName(destination + ".container");
// end of these won't be needed...
if (!extendedConsumerProperties.getExtension().isAutoCommitOffset()) {
messageListenerContainer.getContainerProperties()
@@ -579,6 +629,7 @@ public class KafkaMessageChannelBinder extends
public void setupRebalanceListener(
final ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties,
final ContainerProperties containerProperties) {
Assert.isTrue(!extendedConsumerProperties.getExtension().isResetOffsets(),
"'resetOffsets' cannot be set when a KafkaBindingRebalanceListener is provided");
final String bindingName = bindingNameHolder.get();
@@ -588,7 +639,7 @@ public class KafkaMessageChannelBinder extends
containerProperties
.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {
private boolean initial = true;
private final ThreadLocal<Boolean> initialAssignment = new ThreadLocal<>();
@Override
public void onPartitionsRevokedBeforeCommit(Consumer<?, ?> consumer,
@@ -610,11 +661,15 @@ public class KafkaMessageChannelBinder extends
public void onPartitionsAssigned(Consumer<?, ?> consumer,
Collection<TopicPartition> partitions) {
try {
Boolean initial = this.initialAssignment.get();
if (initial == null) {
initial = Boolean.TRUE;
}
userRebalanceListener.onPartitionsAssigned(bindingName,
consumer, partitions, this.initial);
consumer, partitions, initial);
}
finally {
this.initial = false;
this.initialAssignment.set(Boolean.FALSE);
}
}
@@ -653,28 +708,24 @@ public class KafkaMessageChannelBinder extends
* Reset the offsets if needed; may update the offsets in the container's
* topicPartitionInitialOffsets.
*/
private void resetOffsets(
private void resetOffsetsForAutoRebalance(
final ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties,
final ConsumerFactory<?, ?> consumerFactory, boolean groupManagement,
final ContainerProperties containerProperties) {
final ConsumerFactory<?, ?> consumerFactory, final ContainerProperties containerProperties) {
boolean resetOffsets = extendedConsumerProperties.getExtension().isResetOffsets();
final Object resetTo = consumerFactory.getConfigurationProperties()
.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG);
final AtomicBoolean initialAssignment = new AtomicBoolean(true);
if (!"earliest".equals(resetTo) && !"latest".equals(resetTo)) {
logger.warn("no (or unknown) " + ConsumerConfig.AUTO_OFFSET_RESET_CONFIG
+ " property cannot reset");
resetOffsets = false;
}
if (groupManagement && resetOffsets) {
containerProperties
.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {
final Object resetTo = checkReset(extendedConsumerProperties.getExtension().isResetOffsets(),
consumerFactory.getConfigurationProperties()
.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG));
if (resetTo != null) {
Set<TopicPartition> sought = ConcurrentHashMap.newKeySet();
containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {
@Override
public void onPartitionsRevokedBeforeCommit(
Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
// no op
if (logger.isInfoEnabled()) {
logger.info("Partitions revoked: " + tps);
}
}
@Override
@@ -684,27 +735,41 @@ public class KafkaMessageChannelBinder extends
}
@Override
public void onPartitionsAssigned(Consumer<?, ?> consumer,
Collection<TopicPartition> tps) {
if (initialAssignment.getAndSet(false)) {
public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> tps) {
if (logger.isInfoEnabled()) {
logger.info("Partitions assigned: " + tps);
}
List<TopicPartition> toSeek = tps.stream()
.filter(tp -> {
boolean shouldSeek = !sought.contains(tp);
sought.add(tp);
return shouldSeek;
})
.collect(Collectors.toList());
if (toSeek.size() > 0) {
if ("earliest".equals(resetTo)) {
consumer.seekToBeginning(tps);
consumer.seekToBeginning(toSeek);
}
else if ("latest".equals(resetTo)) {
consumer.seekToEnd(tps);
consumer.seekToEnd(toSeek);
}
}
}
});
}
else if (resetOffsets) {
Arrays.stream(containerProperties.getTopicPartitions())
.map(tpio -> new TopicPartitionInitialOffset(tpio.topic(),
tpio.partition(),
"earliest".equals(resetTo) ? SeekPosition.BEGINNING
: SeekPosition.END))
.collect(Collectors.toList())
.toArray(containerProperties.getTopicPartitions());
}
private Object checkReset(boolean resetOffsets, final Object resetTo) {
if (!resetOffsets) {
return null;
}
else if (!"earliest".equals(resetTo) && !"latest".equals(resetTo)) {
logger.warn("no (or unknown) " + ConsumerConfig.AUTO_OFFSET_RESET_CONFIG
+ " property cannot reset");
return null;
}
else {
return resetTo;
}
}
@@ -823,7 +888,7 @@ public class KafkaMessageChannelBinder extends
KafkaHeaderMapper.class);
}
if (mapper == null) {
BinderHeaderMapper headerMapper = new BinderHeaderMapper() {
DefaultKafkaHeaderMapper headerMapper = new DefaultKafkaHeaderMapper() {
@Override
public void toHeaders(Headers source, Map<String, Object> headers) {
@@ -882,8 +947,7 @@ public class KafkaMessageChannelBinder extends
return (message) -> {
final ConsumerRecord<Object, Object> record = message.getHeaders()
.get(KafkaHeaders.RAW_DATA, ConsumerRecord.class);
ConsumerRecord<Object, Object> record = StaticMessageHeaderAccessor.getSourceData(message);
if (properties.isUseNativeDecoding()) {
if (record != null) {
@@ -1080,17 +1144,26 @@ public class KafkaMessageChannelBinder extends
&& properties.getExtension().isEnableDlq();
}
private TopicPartitionInitialOffset[] getTopicPartitionInitialOffsets(
Collection<PartitionInfo> listenedPartitions) {
final TopicPartitionInitialOffset[] topicPartitionInitialOffsets = new TopicPartitionInitialOffset[listenedPartitions
.size()];
private TopicPartitionOffset[] getTopicPartitionOffsets(
Collection<PartitionInfo> listenedPartitions,
ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties,
ConsumerFactory<?, ?> consumerFactory) {
final TopicPartitionOffset[] topicPartitionOffsets =
new TopicPartitionOffset[listenedPartitions.size()];
int i = 0;
SeekPosition seekPosition = null;
Object resetTo = checkReset(extendedConsumerProperties.getExtension().isResetOffsets(),
consumerFactory.getConfigurationProperties().get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG));
if (resetTo != null) {
seekPosition = "earliest".equals(resetTo) ? SeekPosition.BEGINNING : SeekPosition.END;
}
for (PartitionInfo partition : listenedPartitions) {
topicPartitionInitialOffsets[i++] = new TopicPartitionInitialOffset(
partition.topic(), partition.partition());
topicPartitionOffsets[i++] = new TopicPartitionOffset(
partition.topic(), partition.partition(), seekPosition);
}
return topicPartitionInitialOffsets;
return topicPartitionOffsets;
}
private String toDisplayString(String original, int maxCharacters) {
@@ -1108,7 +1181,7 @@ public class KafkaMessageChannelBinder extends
}
private final class ProducerConfigurationMessageHandler
extends KafkaProducerMessageHandler<byte[], byte[]> implements Lifecycle {
extends KafkaProducerMessageHandler<byte[], byte[]> {
private boolean running = true;
@@ -1118,14 +1191,24 @@ public class KafkaMessageChannelBinder extends
String topic,
ExtendedProducerProperties<KafkaProducerProperties> producerProperties,
ProducerFactory<byte[], byte[]> producerFactory) {
super(kafkaTemplate);
setTopicExpression(new LiteralExpression(topic));
setMessageKeyExpression(
producerProperties.getExtension().getMessageKeyExpression());
if (producerProperties.getExtension().isUseTopicHeader()) {
setTopicExpression(PARSER.parseExpression("headers['" + KafkaHeaders.TOPIC + "'] ?: '" + topic + "'"));
}
else {
setTopicExpression(new LiteralExpression(topic));
}
Expression messageKeyExpression = producerProperties.getExtension().getMessageKeyExpression();
if (expressionInterceptorNeeded(producerProperties)) {
messageKeyExpression = PARSER.parseExpression("headers['"
+ KafkaExpressionEvaluatingInterceptor.MESSAGE_KEY_HEADER
+ "']");
}
setMessageKeyExpression(messageKeyExpression);
setBeanFactory(KafkaMessageChannelBinder.this.getBeanFactory());
if (producerProperties.isPartitioned()) {
SpelExpressionParser parser = new SpelExpressionParser();
setPartitionIdExpression(parser.parseExpression(
setPartitionIdExpression(PARSER.parseExpression(
"headers['" + BinderHeaders.PARTITION_HEADER + "']"));
}
if (producerProperties.getExtension().isSync()) {

View File

@@ -105,7 +105,7 @@ public class KafkaBinderAutoConfigurationPropertiesTest {
assertThat(consumerConfigs.get("value.deserializer"))
.isEqualTo(LongDeserializer.class);
assertThat(consumerConfigs.get("value.serialized")).isNull();
assertThat(consumerConfigs.get("group.id")).isEqualTo("groupIdFromBootConfig");
assertThat(consumerConfigs.get("group.id")).isEqualTo("test");
assertThat(consumerConfigs.get("auto.offset.reset")).isEqualTo("earliest");
assertThat((((List<String>) consumerConfigs.get("bootstrap.servers"))
.containsAll(bootstrapServers))).isTrue();

View File

@@ -33,6 +33,7 @@ import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.IntStream;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.admin.AdminClient;
@@ -48,6 +49,7 @@ import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.record.TimestampType;
@@ -112,7 +114,7 @@ import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.kafka.support.TopicPartitionInitialOffset;
import org.springframework.kafka.support.TopicPartitionOffset;
import org.springframework.kafka.support.converter.MessagingMessageConverter;
import org.springframework.kafka.test.core.BrokerAddress;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
@@ -129,6 +131,7 @@ import org.springframework.messaging.support.ErrorMessage;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SettableListenableFuture;
@@ -362,9 +365,9 @@ public class KafkaBinderTests extends
.get(MessageHeaders.CONTENT_TYPE))
.isEqualTo(MimeTypeUtils.TEXT_PLAIN);
Assertions.assertThat(inboundMessageRef.get().getHeaders().get("foo"))
.isInstanceOf(String.class);
String actual = (String) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN.toString());
.isInstanceOf(MimeType.class);
MimeType actual = (MimeType) inboundMessageRef.get().getHeaders().get("foo");
Assertions.assertThat(actual).isEqualTo(MimeTypeUtils.TEXT_PLAIN);
producerBinding.unbind();
consumerBinding.unbind();
}
@@ -1182,38 +1185,57 @@ public class KafkaBinderTests extends
QueueChannel moduleInputChannel = new QueueChannel();
ExtendedProducerProperties<KafkaProducerProperties> producer1Props = createProducerProperties();
producer1Props.getExtension().setUseTopicHeader(true);
Binding<MessageChannel> producerBinding1 = binder.bindProducer("foo.x",
moduleOutputChannel1, createProducerProperties());
moduleOutputChannel1, producer1Props);
Binding<MessageChannel> producerBinding2 = binder.bindProducer("foo.y",
moduleOutputChannel2, createProducerProperties());
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.getExtension().setAutoRebalanceEnabled(false);
Binding<MessageChannel> consumerBinding1 = binder.bindConsumer("foo.x", "test",
Binding<MessageChannel> consumerBinding1 = binder.bindConsumer("foo.x", "test1",
moduleInputChannel, consumerProperties);
Binding<MessageChannel> consumerBinding2 = binder.bindConsumer("foo.y", "test",
Binding<MessageChannel> consumerBinding2 = binder.bindConsumer("foo.y", "test2",
moduleInputChannel, consumerProperties);
String testPayload1 = "foo" + UUID.randomUUID().toString();
String testPayload1 = "foo1";
Message<?> message1 = org.springframework.integration.support.MessageBuilder
.withPayload(testPayload1.getBytes()).build();
String testPayload2 = "foo" + UUID.randomUUID().toString();
String testPayload2 = "foo2";
Message<?> message2 = org.springframework.integration.support.MessageBuilder
.withPayload(testPayload2.getBytes()).build();
String testPayload3 = "foo3";
Message<?> message3 = org.springframework.integration.support.MessageBuilder
.withPayload(testPayload3.getBytes())
.setHeader(KafkaHeaders.TOPIC, "foo.y")
.build();
// Let the consumer actually bind to the producer before sending a msg
binderBindUnbindLatency();
moduleOutputChannel1.send(message1);
moduleOutputChannel2.send(message2);
moduleOutputChannel1.send(message3);
Message<?>[] messages = new Message[2];
Message<?>[] messages = new Message[3];
messages[0] = receive(moduleInputChannel);
messages[1] = receive(moduleInputChannel);
messages[2] = receive(moduleInputChannel);
assertThat(messages[0]).isNotNull();
assertThat(messages[1]).isNotNull();
assertThat(messages[2]).isNotNull();
assertThat(messages).extracting("payload").containsExactlyInAnyOrder(
testPayload1.getBytes(), testPayload2.getBytes());
testPayload1.getBytes(), testPayload2.getBytes(), testPayload3.getBytes());
Arrays.asList(messages).forEach(message -> {
if (new String((byte[]) message.getPayload()).equals("foo1")) {
assertThat(message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC)).isEqualTo("foo.x");
}
else {
assertThat(message.getHeaders().get(KafkaHeaders.RECEIVED_TOPIC)).isEqualTo("foo.y");
}
});
producerBinding1.unbind();
producerBinding2.unbind();
@@ -1521,8 +1543,13 @@ public class KafkaBinderTests extends
input3, consumerProperties);
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.setPartitionKeyExtractorClass(PartitionTestSupport.class);
producerProperties.setPartitionSelectorClass(PartitionTestSupport.class);
this.applicationContext.registerBean("pkExtractor",
PartitionTestSupport.class, () -> new PartitionTestSupport());
this.applicationContext.registerBean("pkSelector",
PartitionTestSupport.class, () -> new PartitionTestSupport());
producerProperties.setPartitionKeyExtractorName("pkExtractor");
producerProperties.setPartitionSelectorName("pkSelector");
producerProperties.setPartitionCount(3); // overridden to 8 on the actual topic
DirectChannel output = createBindableChannel("output",
createProducerBindingProperties(producerProperties));
@@ -1645,8 +1672,12 @@ public class KafkaBinderTests extends
Binder binder = getBinder();
ExtendedProducerProperties<KafkaProducerProperties> properties = createProducerProperties();
properties.setHeaderMode(HeaderMode.none);
properties.setPartitionKeyExtractorClass(RawKafkaPartitionTestSupport.class);
properties.setPartitionSelectorClass(RawKafkaPartitionTestSupport.class);
this.applicationContext.registerBean("pkExtractor",
RawKafkaPartitionTestSupport.class, () -> new RawKafkaPartitionTestSupport());
this.applicationContext.registerBean("pkSelector",
RawKafkaPartitionTestSupport.class, () -> new RawKafkaPartitionTestSupport());
properties.setPartitionKeyExtractorName("pkExtractor");
properties.setPartitionSelectorName("pkSelector");
properties.setPartitionCount(6);
DirectChannel output = createBindableChannel("output",
@@ -2424,14 +2455,15 @@ public class KafkaBinderTests extends
binding = binder.bindConsumer(testTopicName, "test-x", input,
consumerProperties);
TopicPartitionInitialOffset[] listenedPartitions = TestUtils.getPropertyValue(
ContainerProperties containerProps = TestUtils.getPropertyValue(
binding,
"lifecycle.messageListenerContainer.containerProperties.topicPartitions",
TopicPartitionInitialOffset[].class);
"lifecycle.messageListenerContainer.containerProperties",
ContainerProperties.class);
TopicPartitionOffset[] listenedPartitions = containerProps.getTopicPartitionsToAssign();
assertThat(listenedPartitions).hasSize(2);
assertThat(listenedPartitions).contains(
new TopicPartitionInitialOffset(testTopicName, 2),
new TopicPartitionInitialOffset(testTopicName, 5));
new TopicPartitionOffset(testTopicName, 2),
new TopicPartitionOffset(testTopicName, 5));
int partitions = invokePartitionSize(testTopicName);
assertThat(partitions).isEqualTo(6);
}
@@ -3007,6 +3039,155 @@ public class KafkaBinderTests extends
}
}
@Test
@SuppressWarnings("unchecked")
public void testResetOffsets() throws Exception {
Binding<?> producerBinding = null;
Binding<?> consumerBinding = null;
try {
String testPayload = "test";
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));
ExtendedConsumerProperties<KafkaConsumerProperties> consumerProperties = createConsumerProperties();
consumerProperties.setConcurrency(2);
consumerProperties.setInstanceCount(5); // 10 partitions across 2 threads
consumerProperties.getExtension().setResetOffsets(true);
DirectChannel moduleInputChannel = createBindableChannel("input",
createConsumerBindingProperties(consumerProperties));
String testTopicName = "existing" + System.currentTimeMillis();
KafkaBinderConfigurationProperties configurationProperties = createConfigurationProperties();
configurationProperties.setAutoAddPartitions(true);
Binder binder = getBinder(configurationProperties);
producerBinding = binder.bindProducer(testTopicName, moduleOutputChannel,
producerProperties);
consumerBinding = binder.bindConsumer(testTopicName, "testReset",
moduleInputChannel, consumerProperties);
// Let the consumer actually bind to the producer before sending a msg
binderBindUnbindLatency();
IntStream.range(0, 10).forEach(i -> moduleOutputChannel.send(MessageBuilder.withPayload(testPayload)
.setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.TEXT_PLAIN)
.setHeader(KafkaHeaders.PARTITION_ID, i)
.build()));
CountDownLatch latch1 = new CountDownLatch(10);
CountDownLatch latch2 = new CountDownLatch(20);
AtomicReference<Message<byte[]>> inboundMessageRef = new AtomicReference<>();
AtomicInteger received = new AtomicInteger();
moduleInputChannel.subscribe(message1 -> {
try {
inboundMessageRef.set((Message<byte[]>) message1);
}
finally {
received.incrementAndGet();
latch1.countDown();
latch2.countDown();
}
});
assertThat(latch1.await(10, TimeUnit.SECONDS)).as("Failed to receive messages").isTrue();
consumerBinding.unbind();
consumerBinding = binder.bindConsumer(testTopicName, "testReset",
moduleInputChannel, consumerProperties);
assertThat(latch2.await(10, TimeUnit.SECONDS)).as("Failed to receive message").isTrue();
binder.bindConsumer(testTopicName + "-x", "testReset",
moduleInputChannel, consumerProperties).unbind(); // cause another rebalance
assertThat(received.get()).as("Unexpected reset").isEqualTo(20);
assertThat(inboundMessageRef.get()).isNotNull();
assertThat(inboundMessageRef.get().getPayload()).isEqualTo("test".getBytes());
assertThat(inboundMessageRef.get().getHeaders()).containsEntry("contentType",
MimeTypeUtils.TEXT_PLAIN);
}
finally {
if (producerBinding != null) {
producerBinding.unbind();
}
if (consumerBinding != null) {
consumerBinding.unbind();
}
}
}
@Test
@SuppressWarnings("unchecked")
public void testRecordMetadata() throws Exception {
Binding<?> producerBinding = null;
try {
String testPayload = "test";
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.getExtension().setRecordMetadataChannel("metaChannel");
QueueChannel metaChannel = new QueueChannel();
DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));
String testTopicName = "existing" + System.currentTimeMillis();
KafkaTestBinder binder = getBinder();
((GenericApplicationContext) binder.getApplicationContext()).registerBean("metaChannel",
MessageChannel.class, () -> metaChannel);
producerBinding = binder.bindProducer(testTopicName, moduleOutputChannel,
producerProperties);
moduleOutputChannel
.send(new GenericMessage<>("foo", Collections.singletonMap(KafkaHeaders.PARTITION_ID, 0)));
Message<?> sendResult = metaChannel.receive(10_000);
assertThat(sendResult).isNotNull();
RecordMetadata meta = sendResult.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
assertThat(meta).isNotNull()
.hasFieldOrPropertyWithValue("topic", testTopicName)
.hasFieldOrPropertyWithValue("partition", 0)
.hasFieldOrPropertyWithValue("offset", 0L);
}
finally {
if (producerBinding != null) {
producerBinding.unbind();
}
}
}
@Test
@SuppressWarnings("unchecked")
public void testMessageKeyInPayload() throws Exception {
Binding<?> producerBinding = null;
try {
String testPayload = "test";
ExtendedProducerProperties<KafkaProducerProperties> producerProperties = createProducerProperties();
producerProperties.getExtension()
.setMessageKeyExpression(spelExpressionParser.parseExpression("payload.field.bytes"));
DirectChannel moduleOutputChannel = createBindableChannel("output",
createProducerBindingProperties(producerProperties));
String testTopicName = "existing" + System.currentTimeMillis();
KafkaTestBinder binder = getBinder();
producerBinding = binder.bindProducer(testTopicName, moduleOutputChannel,
producerProperties);
moduleOutputChannel.addInterceptor(new ChannelInterceptor() {
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
assertThat(message.getHeaders()
.get(KafkaExpressionEvaluatingInterceptor.MESSAGE_KEY_HEADER))
.isEqualTo("foo".getBytes());
return message;
}
});
moduleOutputChannel.send(
new GenericMessage<>(new Pojo("foo"), Collections.singletonMap(KafkaHeaders.PARTITION_ID, 0)));
}
finally {
if (producerBinding != null) {
producerBinding.unbind();
}
}
}
private final class FailingInvocationCountingMessageHandler
implements MessageHandler {
@@ -3052,4 +3233,26 @@ public class KafkaBinderTests extends
}
public static class Pojo {
private String field;
public Pojo() {
super();
}
public Pojo(String field) {
this.field = field;
}
public String getField() {
return this.field;
}
public void setField(String field) {
this.field = field;
}
}
}

View File

@@ -45,11 +45,13 @@ import org.springframework.cloud.stream.binder.ExtendedConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.properties.KafkaConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.provisioning.ConsumerDestination;
import org.springframework.context.support.GenericApplicationContext;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.test.util.TestUtils;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.messaging.MessageChannel;
import static org.assertj.core.api.Assertions.assertThat;
@@ -175,6 +177,7 @@ public class KafkaBinderUnitTests {
private void testOffsetResetWithGroupManagement(final boolean earliest,
boolean groupManage, String topic, String group) throws Exception {
final List<TopicPartition> partitions = new ArrayList<>();
partitions.add(new TopicPartition(topic, 0));
partitions.add(new TopicPartition(topic, 1));
@@ -218,8 +221,18 @@ public class KafkaBinderUnitTests {
latch.countDown();
return null;
}).given(consumer).seekToEnd(any());
class Customizer implements ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> {
@Override
public void configure(AbstractMessageListenerContainer<?, ?> container, String destinationName,
String group) {
container.getContainerProperties().setMissingTopicsFatal(false);
}
}
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(
configurationProperties, provisioningProvider) {
configurationProperties, provisioningProvider, new Customizer(), null) {
@Override
protected ConsumerFactory<?, ?> createKafkaConsumerFactory(boolean anonymous,

View File

@@ -17,9 +17,12 @@
package org.springframework.cloud.stream.binder.kafka;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.junit.ClassRule;
@@ -54,8 +57,15 @@ import static org.mockito.Mockito.spy;
*/
public class KafkaTransactionTests {
private static Map<String, String> brokerProperties = new HashMap<>();
static {
brokerProperties.put("transaction.state.log.replication.factor", "1");
brokerProperties.put("transaction.state.log.min.isr", "1");
}
@ClassRule
public static final EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1);
public static final EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1).brokerProperties(brokerProperties);
@SuppressWarnings({ "rawtypes", "unchecked" })
@Test
@@ -71,6 +81,12 @@ public class KafkaTransactionTests {
configurationProperties, kafkaProperties);
provisioningProvider.setMetadataRetryOperations(new RetryTemplate());
final Producer mockProducer = mock(Producer.class);
KafkaProducerProperties extension1 = configurationProperties
.getTransaction().getProducer().getExtension();
extension1.getConfiguration().put(ProducerConfig.RETRIES_CONFIG, "1");
extension1.getConfiguration().put(ProducerConfig.ACKS_CONFIG, "all");
willReturn(Collections.singletonList(new TopicPartition("foo", 0)))
.given(mockProducer).partitionsFor(anyString());
KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(
@@ -83,7 +99,7 @@ public class KafkaTransactionTests {
DefaultKafkaProducerFactory<byte[], byte[]> producerFactory = spy(
super.getProducerFactory(transactionIdPrefix,
producerProperties));
willReturn(mockProducer).given(producerFactory).createProducer();
willReturn(mockProducer).given(producerFactory).createProducer("foo-");
return producerFactory;
}

View File

@@ -26,7 +26,7 @@ import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import static org.assertj.core.api.Assertions.assertThat;
@@ -36,7 +36,7 @@ import static org.assertj.core.api.Assertions.assertThat;
public class MultiBinderMeterRegistryTest {
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, 10);
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, 10);
@Test
public void testMetricsWorkWithMultiBinders() {
@@ -48,7 +48,7 @@ public class MultiBinderMeterRegistryTest {
"--spring.cloud.stream.binders.inbound.type=kafka",
"--spring.cloud.stream.binders.inbound.environment"
+ ".spring.cloud.stream.kafka.binder.brokers" + "="
+ embeddedKafka.getBrokersAsString());
+ embeddedKafka.getEmbeddedKafka().getBrokersAsString());
final MeterRegistry meterRegistry = applicationContext.getBean(MeterRegistry.class);

View File

@@ -48,6 +48,7 @@ import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
@@ -64,6 +65,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = "spring.cloud.stream.bindings.input.group="
+ KafkaBinderActuatorTests.TEST_CONSUMER_GROUP)
// @checkstyle:on
@DirtiesContext
public class KafkaBinderActuatorTests {
static final String TEST_CONSUMER_GROUP = "testGroup-actuatorTests";

View File

@@ -51,6 +51,7 @@ import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
@@ -72,6 +73,7 @@ import static org.assertj.core.api.Assertions.assertThat;
+ "bindingSpecificPropertyShouldWinOverDefault",
"spring.cloud.stream.kafka.default.consumer.ackEachRecord=true",
"spring.cloud.stream.kafka.bindings.custom-in.consumer.ackEachRecord=false" })
@DirtiesContext
public class KafkaBinderExtendedPropertiesTest {
private static final String KAFKA_BROKERS_PROPERTY = "spring.cloud.stream.kafka.binder.brokers";

View File

@@ -22,6 +22,7 @@ import java.util.concurrent.TimeUnit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
@@ -51,6 +52,7 @@ import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.kafka.consumer.auto-offset-reset=earliest" })
@DirtiesContext
@Ignore
public class KafkaNullConverterTest {
private static final String KAFKA_BROKERS_PROPERTY = "spring.kafka.bootstrap-servers";

View File

@@ -0,0 +1,154 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.integration;
import java.util.Map;
import kafka.server.KafkaConfig;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.requests.IsolationLevel;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.BeanCreationException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.binder.BinderFactory;
import org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.kafka.transaction.KafkaTransactionManager;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.AbstractPlatformTransactionManager;
import org.springframework.transaction.support.TransactionSynchronizationManager;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Gary Russell
* @since 2.1.4
*
*/
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx.",
"spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=99",
"spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all"})
@DirtiesContext
public class ProducerOnlyTransactionTests {
private static final String KAFKA_BROKERS_PROPERTY = "spring.cloud.stream.kafka.binder.brokers";
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, "output")
.brokerProperty(KafkaConfig.TransactionsTopicReplicationFactorProp(), "1")
.brokerProperty(KafkaConfig.TransactionsTopicMinISRProp(), "1");
@Autowired
private Sender sender;
@Autowired
private MessageChannel output;
@BeforeClass
public static void setup() {
System.setProperty(KAFKA_BROKERS_PROPERTY,
embeddedKafka.getEmbeddedKafka().getBrokersAsString());
}
@AfterClass
public static void clean() {
System.clearProperty(KAFKA_BROKERS_PROPERTY);
}
@Test
public void testProducerTx() {
this.sender.doInTransaction(this.output);
assertThat(this.sender.isInTx()).isTrue();
Map<String, Object> props = KafkaTestUtils.consumerProps("consumeTx", "false",
embeddedKafka.getEmbeddedKafka());
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, IsolationLevel.READ_COMMITTED.name().toLowerCase());
Consumer<?, ?> consumer = new KafkaConsumer<>(props);
embeddedKafka.getEmbeddedKafka().consumeFromAllEmbeddedTopics(consumer);
ConsumerRecord<?, ?> record = KafkaTestUtils.getSingleRecord(consumer, "output");
assertThat(record.value()).isEqualTo("foo".getBytes());
}
@EnableBinding(Source.class)
@EnableAutoConfiguration
@EnableTransactionManagement
public static class Config {
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
try {
ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
MessageChannel.class)).getTransactionalProducerFactory();
KafkaTransactionManager<byte[], byte[]> tm = new KafkaTransactionManager<>(pf);
tm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
return tm;
}
catch (BeanCreationException e) { // needed to avoid other tests in this package failing when there is no binder
return null;
}
}
@Bean
public Sender sender() {
return new Sender();
}
}
public static class Sender {
private boolean isInTx;
@Transactional
public void doInTransaction(MessageChannel output) {
this.isInTx = TransactionSynchronizationManager.isActualTransactionActive();
output.send(new GenericMessage<>("foo"));
}
public boolean isInTx() {
return this.isInTx;
}
}
}

View File

@@ -0,0 +1,144 @@
/*
* Copyright 2019-2019 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.cloud.stream.binder.kafka.integration2;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import kafka.server.KafkaConfig;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
/**
* @author Gary Russell
* @since 3.0
*
*/
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, properties = {
"spring.kafka.consumer.properties.isolation.level=read_committed",
"spring.kafka.consumer.enable-auto-commit=false",
"spring.kafka.consumer.auto-offset-reset=earliest",
"spring.cloud.stream.bindings.input.destination=consumer.producer.txIn",
"spring.cloud.stream.bindings.input.group=consumer.producer.tx",
"spring.cloud.stream.bindings.input.consumer.max-attempts=1",
"spring.cloud.stream.bindings.output.destination=consumer.producer.txOut",
"spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx.",
"spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=99",
"spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all"})
@DirtiesContext
public class ConsumerProducerTransactionTests {
private static final String KAFKA_BROKERS_PROPERTY = "spring.cloud.stream.kafka.binder.brokers";
@ClassRule
public static EmbeddedKafkaRule embeddedKafka = new EmbeddedKafkaRule(1, true, "consumer.producer.txOut")
.brokerProperty(KafkaConfig.TransactionsTopicReplicationFactorProp(), "1")
.brokerProperty(KafkaConfig.TransactionsTopicMinISRProp(), "1");
@Autowired
private Config config;
@BeforeClass
public static void setup() {
System.setProperty(KAFKA_BROKERS_PROPERTY,
embeddedKafka.getEmbeddedKafka().getBrokersAsString());
System.setProperty("spring.kafka.bootstrap-servers",
embeddedKafka.getEmbeddedKafka().getBrokersAsString());
}
@AfterClass
public static void clean() {
System.clearProperty(KAFKA_BROKERS_PROPERTY);
System.clearProperty("spring.kafka.bootstrap-servers");
}
@Test
public void testProducerRunsInConsumerTransaction() throws InterruptedException {
assertThat(this.config.latch.await(10, TimeUnit.SECONDS)).isTrue();
assertThat(this.config.outs).containsExactlyInAnyOrder("ONE", "THREE");
}
@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class Config {
final List<String> outs = new ArrayList<>();
final CountDownLatch latch = new CountDownLatch(2);
@Autowired
private MessageChannel output;
@KafkaListener(id = "test.cons.prod", topics = "consumer.producer.txOut")
public void listenOut(String in) {
this.outs.add(in);
this.latch.countDown();
}
@StreamListener(Processor.INPUT)
public void listenIn(String in) {
this.output.send(new GenericMessage<>(in.toUpperCase()));
if (in.equals("two")) {
throw new RuntimeException("fail");
}
}
@Bean
public ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
return args -> {
template.send("consumer.producer.txIn", "one".getBytes());
template.send("consumer.producer.txIn", "two".getBytes());
template.send("consumer.producer.txIn", "three".getBytes());
};
}
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
return (container, dest, group) -> container
.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(0));
}
}
}

View File

@@ -4,7 +4,7 @@
<pattern>%d{ISO8601} %5p %t %c{2}:%L - %m%n</pattern>
</encoder>
</appender>
<logger name="org.apache.kafka" level="DEBUG"/>
<logger name="org.apache.kafka" level="INFO"/>
<logger name="org.springframework.integration.kafka" level="INFO"/>
<logger name="org.springframework.kafka" level="INFO"/>
<logger name="org.springframework.cloud.stream" level="INFO" />