Compare commits
51 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
f5e12a29c1 | ||
|
|
416580e607 | ||
|
|
a81093734e | ||
|
|
5f4e3eebc5 | ||
|
|
cb42e80dac | ||
|
|
4fb5037fd7 | ||
|
|
d8b8c8d9fd | ||
|
|
9e4a1075d4 | ||
|
|
79497ad264 | ||
|
|
7e48afa005 | ||
|
|
87ac491230 | ||
|
|
a4ad9e2c0b | ||
|
|
4e6881830a | ||
|
|
829ce1cf7e | ||
|
|
c5289589c1 | ||
|
|
dd607627ed | ||
|
|
0ea4315af8 | ||
|
|
f25dbff2b7 | ||
|
|
f8b290844b | ||
|
|
a7e025794c | ||
|
|
634a73c9ff | ||
|
|
bc2f692964 | ||
|
|
7eefe6c567 | ||
|
|
c5d108ef89 | ||
|
|
47e7ca07a8 | ||
|
|
7cc001ac4c | ||
|
|
a7299df63f | ||
|
|
33aa926940 | ||
|
|
a1fb7f0a2d | ||
|
|
e2eca34e4b | ||
|
|
bc02da2900 | ||
|
|
1f52ec5020 | ||
|
|
394b8a6685 | ||
|
|
32939e30fe | ||
|
|
cadb5422cc | ||
|
|
99880ba216 | ||
|
|
9e9e0f7ea3 | ||
|
|
e1648083e6 | ||
|
|
87491118c3 | ||
|
|
cf7acb23e8 | ||
|
|
1fbb6f250e | ||
|
|
ae4abe4f33 | ||
|
|
5d312228db | ||
|
|
69c5b67126 | ||
|
|
83cfdfc532 | ||
|
|
c6154eecfc | ||
|
|
f8a4488a0e | ||
|
|
ffde5d35db | ||
|
|
5921d464ed | ||
|
|
023d3df7f7 | ||
|
|
846534fe84 |
62
README.adoc
62
README.adoc
@@ -213,13 +213,17 @@ This property is deprecated as of 3.1 in favor of using `ackMode`.
|
||||
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
|
||||
+
|
||||
Default: `false`.
|
||||
|
||||
autoCommitOffset::
|
||||
|
||||
Starting with version 3.1, this property is deprecated.
|
||||
See `ackMode` for more details on alternatives.
|
||||
Whether to autocommit offsets when a message has been processed.
|
||||
If set to `false`, a header with the key `kafka_acknowledgment` of the type `org.springframework.kafka.support.Acknowledgment` header is present in the inbound message.
|
||||
Applications may use this header for acknowledging messages.
|
||||
See the examples section for details.
|
||||
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
|
||||
Also see `ackEachRecord`. This property is deprecated as of 3.1. See `ackMode` for more details.
|
||||
Also see `ackEachRecord`.
|
||||
+
|
||||
Default: `true`.
|
||||
ackMode::
|
||||
@@ -228,22 +232,22 @@ This is based on the AckMode enumeration defined in Spring Kafka.
|
||||
If `ackEachRecord` property is set to `true` and consumer is not in batch mode, then this will use the ack mode of `RECORD`, otherwise, use the provided ack mode using this property.
|
||||
|
||||
autoCommitOnError::
|
||||
Effective only if `autoCommitOffset` is set to `true`.
|
||||
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
|
||||
If set to `true`, it always auto-commits (if auto-commit is enabled).
|
||||
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
|
||||
In pollable consumers, if set to `true`, it always auto commits on error.
|
||||
If not set (the default) or false, it will not auto commit in pollable consumers.
|
||||
Note that this property is only applicable for pollable consumers.
|
||||
+
|
||||
Default: not set.
|
||||
resetOffsets::
|
||||
Whether to reset offsets on the consumer to the value provided by startOffset.
|
||||
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
|
||||
See <<reset-offsets>> for more information about this property.
|
||||
+
|
||||
Default: `false`.
|
||||
startOffset::
|
||||
The starting offset for new groups.
|
||||
Allowed values: `earliest` and `latest`.
|
||||
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
|
||||
Also see `resetOffsets` (earlier in this list).
|
||||
See <<reset-offsets>> for more information about this property.
|
||||
+
|
||||
Default: null (equivalent to `earliest`).
|
||||
enableDlq::
|
||||
@@ -334,11 +338,38 @@ To achieve exactly once consumption and production of records, the consumer and
|
||||
+
|
||||
Default: none.
|
||||
|
||||
[[reset-offsets]]
|
||||
==== Resetting Offsets
|
||||
|
||||
When an application starts, the initial position in each assigned partition depends on two properties `startOffset` and `resetOffsets`.
|
||||
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
|
||||
i.e. If there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
|
||||
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
|
||||
These defaults can be overridden by setting the `startOffset` binding property.
|
||||
There will be no committed offset(s) the first time the binding is started with a particular `group`.
|
||||
The other condition where no committed offset exists is if the offset has been expired.
|
||||
With modern brokers (since 2.1), and default broker properties, the offsets are expired 7 days after the last member leaves the group.
|
||||
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.
|
||||
|
||||
When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; i.e. any current committed offset is ignored.
|
||||
|
||||
Following are two use cases when this might be used.
|
||||
|
||||
1. Consuming from a compacted topic containing key/value pairs.
|
||||
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding will perform a `seekToBeginning` on all newly assigned partitions.
|
||||
|
||||
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
|
||||
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will perform a `seekToEnd` on all newly assigned partitions.
|
||||
|
||||
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.
|
||||
|
||||
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets` should not be set to `true`, otherwise, that will cause an error.
|
||||
|
||||
==== Consuming Batches
|
||||
|
||||
Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
|
||||
Otherwise, the method will be called with one record at a time.
|
||||
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `min.fetch.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
|
||||
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
|
||||
|
||||
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
|
||||
|
||||
@@ -808,6 +839,23 @@ When the binder discovers that these customizers are available as beans, it will
|
||||
|
||||
Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.
|
||||
|
||||
[[admin-client-config-customization]]
|
||||
=== Customizing AdminClient Configuration
|
||||
|
||||
As with consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
|
||||
AdminClientConfigCustomizer's configure method provides access to the admin client properties, using which you can define further customization.
|
||||
Binder's Kafka topic provisioner gives the highest precedence for the properties given through this customizer.
|
||||
Here is an example of providing this customizer bean.
|
||||
|
||||
```
|
||||
@Bean
|
||||
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
|
||||
return props -> {
|
||||
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
= Appendices
|
||||
[appendix]
|
||||
[[building]]
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
<parent>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
|
||||
<version>3.1.0</version>
|
||||
<version>3.1.3</version>
|
||||
</parent>
|
||||
<packaging>jar</packaging>
|
||||
<name>spring-cloud-stream-binder-kafka-docs</name>
|
||||
|
||||
@@ -9,7 +9,6 @@
|
||||
|spring.cloud.stream.dynamic-destinations | `[]` | A list of destinations that can be bound dynamically. If set, only listed destinations can be bound.
|
||||
|spring.cloud.stream.function.batch-mode | `false` |
|
||||
|spring.cloud.stream.function.bindings | |
|
||||
|spring.cloud.stream.function.definition | | Definition of functions to bind. If several functions need to be composed into one, use pipes (e.g., 'fooFunc\|barFunc')
|
||||
|spring.cloud.stream.instance-count | `1` | The number of deployed instances of an application. Default: 1. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-count" where 'foo' is the name of the binding.
|
||||
|spring.cloud.stream.instance-index | `0` | The instance id of the application: a number from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index" where 'foo' is the name of the binding.
|
||||
|spring.cloud.stream.instance-index-list | | A list of instance id's from 0 to instanceCount-1. Used for partitioning and with Kafka. NOTE: Could also be managed per individual binding "spring.cloud.stream.bindings.foo.consumer.instance-index-list" where 'foo' is the name of the binding. This setting will override the one set in 'spring.cloud.stream.instance-index'
|
||||
|
||||
@@ -77,6 +77,22 @@ NOTE: If the destination property is not set on the binding, a topic is created
|
||||
|
||||
Once built as a uber-jar (e.g., `kstream-consumer-app.jar`), you can run the above example like the following.
|
||||
|
||||
If the applications choose to define the functional beans using Spring's `Component` annotation, the binder also suppports that model.
|
||||
The above functional bean could be rewritten as below.
|
||||
|
||||
```
|
||||
@Component(name = "process")
|
||||
public class SimpleConsumer implements java.util.function.Consumer<KStream<Object, String>> {
|
||||
|
||||
@Override
|
||||
public void accept(KStream<Object, String> input) {
|
||||
input.foreach((key, value) -> {
|
||||
System.out.println("Key: " + key + " Value: " + value);
|
||||
});
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
[source]
|
||||
----
|
||||
java -jar kstream-consumer-app.jar --spring.cloud.stream.bindings.process-in-0.destination=my-topic
|
||||
@@ -403,7 +419,7 @@ Finally, here is the `StreamListener` equivalent of the application with three i
|
||||
@SendTo("output")
|
||||
public KStream<Long, EnrichedOrder> process(
|
||||
@Input("input-1") KStream<Long, Order> ordersStream,
|
||||
@Input("input-"2) GlobalKTable<Long, Customer> customers,
|
||||
@Input("input-2") GlobalKTable<Long, Customer> customers,
|
||||
@Input("input-3") GlobalKTable<Long, Product> products) {
|
||||
|
||||
KStream<Long, CustomerOrder> customerOrdersStream = ordersStream.join(
|
||||
@@ -1624,6 +1640,118 @@ For instance, if we want to change the header key on this binding to `my_event`
|
||||
|
||||
`spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey=my_event`.
|
||||
|
||||
=== Binding visualization and control in Kafka Streams binder
|
||||
|
||||
Starting with version 3.1.2, Kafka Streams binder supports binding visualization and control.
|
||||
The only two lifecycle phases supported are `STOPPED` and `STARTED`.
|
||||
The lifecycle phases `PAUSED` and `RESUMED` are not available in Kafka Streams binder.
|
||||
|
||||
In order to activate binding visualization and control, the application needs to include the following two dependencies.
|
||||
|
||||
```
|
||||
<dependency>
|
||||
<groupId>org.springframework.boot</groupId>
|
||||
<artifactId>spring-boot-starter-actuator</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.springframework.boot</groupId>
|
||||
<artifactId>spring-boot-starter-web</artifactId>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
If you prefer using webflux, you can then include `spring-boot-starter-webflux` instead of the standard web dependency.
|
||||
|
||||
In addition, you also need to set the following property:
|
||||
|
||||
```
|
||||
management.endpoints.web.exposure.include=bindings
|
||||
```
|
||||
|
||||
To illustrate this feature further, let us use the following application as a guide:
|
||||
|
||||
```
|
||||
@SpringBootApplication
|
||||
public class KafkaStreamsApplication {
|
||||
|
||||
public static void main(String[] args) {
|
||||
SpringApplication.run(KafkaStreamsApplication.class, args);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public Consumer<KStream<String, String>> consumer() {
|
||||
return s -> s.foreach((key, value) -> System.out.println(value));
|
||||
}
|
||||
|
||||
@Bean
|
||||
public Function<KStream<String, String>, KStream<String, String>> function() {
|
||||
return ks -> ks;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
As we can see, the application has two Kafka Streams functions - one, a consumer and another a function.
|
||||
The consumer binding is named by default as `consumer-in-0`.
|
||||
Similarly, for the function, the input binding is `function-in-0` and the output binding is `function-out-0`.
|
||||
|
||||
Once the application is started, we can find details about the bindings using the following bindings endpoint.
|
||||
|
||||
```
|
||||
curl http://localhost:8080/actuator/bindings | jq .
|
||||
[
|
||||
{
|
||||
"bindingName": "consumer-in-0",
|
||||
"name": "consumer-in-0",
|
||||
"group": "consumer-applicationId",
|
||||
"pausable": false,
|
||||
"state": "running",
|
||||
"paused": false,
|
||||
"input": true,
|
||||
"extendedInfo": {}
|
||||
},
|
||||
{
|
||||
"bindingName": "function-in-0",
|
||||
"name": "function-in-0",
|
||||
"group": "function-applicationId",
|
||||
"pausable": false,
|
||||
"state": "running",
|
||||
"paused": false,
|
||||
"input": true,
|
||||
"extendedInfo": {}
|
||||
},
|
||||
{
|
||||
"bindingName": "function-out-0",
|
||||
"name": "function-out-0",
|
||||
"group": "function-applicationId",
|
||||
"pausable": false,
|
||||
"state": "running",
|
||||
"paused": false,
|
||||
"input": false,
|
||||
"extendedInfo": {}
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
The details about all three bindings can be found above.
|
||||
|
||||
Let us now stop the consumer-in-0 binding.
|
||||
|
||||
```
|
||||
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/consumer-in-0
|
||||
```
|
||||
|
||||
At this point, no records will be received through this binding.
|
||||
|
||||
Start the binding again.
|
||||
|
||||
```
|
||||
curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST http://localhost:8080/actuator/bindings/consumer-in-0
|
||||
```
|
||||
|
||||
When there are multiple bindings present on a single function, invoking these operations on any of those bindings will work.
|
||||
This is because all the bindings on a single function are backed by the same `StreamsBuilderFactoryBean`.
|
||||
Therefore, for the function above, either `function-in-0` or `function-out-0` will work.
|
||||
|
||||
=== Configuration Options
|
||||
|
||||
This section contains the configuration options used by the Kafka Streams binder.
|
||||
|
||||
@@ -192,13 +192,17 @@ This property is deprecated as of 3.1 in favor of using `ackMode`.
|
||||
If the `ackMode` is not set and batch mode is not enabled, `RECORD` ackMode will be used.
|
||||
+
|
||||
Default: `false`.
|
||||
|
||||
autoCommitOffset::
|
||||
|
||||
Starting with version 3.1, this property is deprecated.
|
||||
See `ackMode` for more details on alternatives.
|
||||
Whether to autocommit offsets when a message has been processed.
|
||||
If set to `false`, a header with the key `kafka_acknowledgment` of the type `org.springframework.kafka.support.Acknowledgment` header is present in the inbound message.
|
||||
Applications may use this header for acknowledging messages.
|
||||
See the examples section for details.
|
||||
When this property is set to `false`, Kafka binder sets the ack mode to `org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL` and the application is responsible for acknowledging records.
|
||||
Also see `ackEachRecord`. This property is deprecated as of 3.1. See `ackMode` for more details.
|
||||
Also see `ackEachRecord`.
|
||||
+
|
||||
Default: `true`.
|
||||
ackMode::
|
||||
@@ -207,22 +211,22 @@ This is based on the AckMode enumeration defined in Spring Kafka.
|
||||
If `ackEachRecord` property is set to `true` and consumer is not in batch mode, then this will use the ack mode of `RECORD`, otherwise, use the provided ack mode using this property.
|
||||
|
||||
autoCommitOnError::
|
||||
Effective only if `autoCommitOffset` is set to `true`.
|
||||
If set to `false`, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
|
||||
If set to `true`, it always auto-commits (if auto-commit is enabled).
|
||||
If not set (the default), it effectively has the same value as `enableDlq`, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
|
||||
In pollable consumers, if set to `true`, it always auto commits on error.
|
||||
If not set (the default) or false, it will not auto commit in pollable consumers.
|
||||
Note that this property is only applicable for pollable consumers.
|
||||
+
|
||||
Default: not set.
|
||||
resetOffsets::
|
||||
Whether to reset offsets on the consumer to the value provided by startOffset.
|
||||
Must be false if a `KafkaRebalanceListener` is provided; see <<rebalance-listener>>.
|
||||
See <<reset-offsets>> for more information about this property.
|
||||
+
|
||||
Default: `false`.
|
||||
startOffset::
|
||||
The starting offset for new groups.
|
||||
Allowed values: `earliest` and `latest`.
|
||||
If the consumer group is set explicitly for the consumer 'binding' (through `spring.cloud.stream.bindings.<channelName>.group`), 'startOffset' is set to `earliest`. Otherwise, it is set to `latest` for the `anonymous` consumer group.
|
||||
Also see `resetOffsets` (earlier in this list).
|
||||
See <<reset-offsets>> for more information about this property.
|
||||
+
|
||||
Default: null (equivalent to `earliest`).
|
||||
enableDlq::
|
||||
@@ -313,11 +317,38 @@ To achieve exactly once consumption and production of records, the consumer and
|
||||
+
|
||||
Default: none.
|
||||
|
||||
[[reset-offsets]]
|
||||
==== Resetting Offsets
|
||||
|
||||
When an application starts, the initial position in each assigned partition depends on two properties `startOffset` and `resetOffsets`.
|
||||
If `resetOffsets` is `false`, normal Kafka consumer https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset[`auto.offset.reset`] semantics apply.
|
||||
i.e. If there is no committed offset for a partition for the binding's consumer group, the position is `earliest` or `latest`.
|
||||
By default, bindings with an explicit `group` use `earliest`, and anonymous bindings (with no `group`) use `latest`.
|
||||
These defaults can be overridden by setting the `startOffset` binding property.
|
||||
There will be no committed offset(s) the first time the binding is started with a particular `group`.
|
||||
The other condition where no committed offset exists is if the offset has been expired.
|
||||
With modern brokers (since 2.1), and default broker properties, the offsets are expired 7 days after the last member leaves the group.
|
||||
See the https://kafka.apache.org/documentation/#brokerconfigs_offsets.retention.minutes[`offsets.retention.minutes`] broker property for more information.
|
||||
|
||||
When `resetOffsets` is `true`, the binder applies similar semantics to those that apply when there is no committed offset on the broker, as if this binding has never consumed from the topic; i.e. any current committed offset is ignored.
|
||||
|
||||
Following are two use cases when this might be used.
|
||||
|
||||
1. Consuming from a compacted topic containing key/value pairs.
|
||||
Set `resetOffsets` to `true` and `startOffset` to `earliest`; the binding will perform a `seekToBeginning` on all newly assigned partitions.
|
||||
|
||||
2. Consuming from a topic containing events, where you are only interested in events that occur while this binding is running.
|
||||
Set `resetOffsets` to `true` and `startOffset` to `latest`; the binding will perform a `seekToEnd` on all newly assigned partitions.
|
||||
|
||||
IMPORTANT: If a rebalance occurs after the initial assignment, the seeks will only be performed on any newly assigned partitions that were not assigned during the initial assignment.
|
||||
|
||||
For more control over topic offsets, see <<rebalance-listener>>; when a listener is provided, `resetOffsets` should not be set to `true`, otherwise, that will cause an error.
|
||||
|
||||
==== Consuming Batches
|
||||
|
||||
Starting with version 3.0, when `spring.cloud.stream.binding.<name>.consumer.batch-mode` is set to `true`, all of the records received by polling the Kafka `Consumer` will be presented as a `List<?>` to the listener method.
|
||||
Otherwise, the method will be called with one record at a time.
|
||||
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `min.fetch.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
|
||||
The size of the batch is controlled by Kafka consumer properties `max.poll.records`, `fetch.min.bytes`, `fetch.max.wait.ms`; refer to the Kafka documentation for more information.
|
||||
|
||||
Bear in mind that batch mode is not supported with `@StreamListener` - it only works with the newer functional programming model.
|
||||
|
||||
@@ -786,3 +817,20 @@ For example, if you want to gain access to a bean that is defined at the applica
|
||||
When the binder discovers that these customizers are available as beans, it will invoke the `configure` method right before creating the consumer and producer factories.
|
||||
|
||||
Both of these interfaces also provide access to both the binding and destination names so that they can be accessed while customizing producer and consumer properties.
|
||||
|
||||
[[admin-client-config-customization]]
|
||||
=== Customizing AdminClient Configuration
|
||||
|
||||
As with consumer and producer config customization above, applications can also customize the configuration for admin clients by providing an `AdminClientConfigCustomizer`.
|
||||
AdminClientConfigCustomizer's configure method provides access to the admin client properties, using which you can define further customization.
|
||||
Binder's Kafka topic provisioner gives the highest precedence for the properties given through this customizer.
|
||||
Here is an example of providing this customizer bean.
|
||||
|
||||
```
|
||||
@Bean
|
||||
public AdminClientConfigCustomizer adminClientConfigCustomizer() {
|
||||
return props -> {
|
||||
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
14
pom.xml
14
pom.xml
@@ -2,21 +2,21 @@
|
||||
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
|
||||
<version>3.1.0</version>
|
||||
<version>3.1.3</version>
|
||||
<packaging>pom</packaging>
|
||||
<parent>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-build</artifactId>
|
||||
<version>3.0.0</version>
|
||||
<version>3.0.3</version>
|
||||
<relativePath />
|
||||
</parent>
|
||||
<properties>
|
||||
<java.version>1.8</java.version>
|
||||
<spring-kafka.version>2.6.3</spring-kafka.version>
|
||||
<spring-integration-kafka.version>5.4.1</spring-integration-kafka.version>
|
||||
<kafka.version>2.6.0</kafka.version>
|
||||
<spring-cloud-schema-registry.version>1.1.0</spring-cloud-schema-registry.version>
|
||||
<spring-cloud-stream.version>3.1.0</spring-cloud-stream.version>
|
||||
<spring-kafka.version>2.6.8</spring-kafka.version>
|
||||
<spring-integration-kafka.version>5.4.7</spring-integration-kafka.version>
|
||||
<kafka.version>2.6.2</kafka.version>
|
||||
<spring-cloud-schema-registry.version>1.1.3</spring-cloud-schema-registry.version>
|
||||
<spring-cloud-stream.version>3.1.3</spring-cloud-stream.version>
|
||||
<maven-checkstyle-plugin.failsOnError>true</maven-checkstyle-plugin.failsOnError>
|
||||
<maven-checkstyle-plugin.failsOnViolation>true</maven-checkstyle-plugin.failsOnViolation>
|
||||
<maven-checkstyle-plugin.includeTestSourceDirectory>true</maven-checkstyle-plugin.includeTestSourceDirectory>
|
||||
|
||||
@@ -4,7 +4,7 @@
|
||||
<parent>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
|
||||
<version>3.1.0</version>
|
||||
<version>3.1.3</version>
|
||||
</parent>
|
||||
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
|
||||
<description>Spring Cloud Starter Stream Kafka</description>
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
<parent>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
|
||||
<version>3.1.0</version>
|
||||
<version>3.1.3</version>
|
||||
</parent>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-core</artifactId>
|
||||
<description>Spring Cloud Stream Kafka Binder Core</description>
|
||||
|
||||
@@ -114,14 +114,8 @@ public class KafkaConsumerProperties {
|
||||
private ContainerProperties.AckMode ackMode;
|
||||
|
||||
/**
|
||||
* Effective only if autoCommitOffset is set to true.
|
||||
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
|
||||
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
|
||||
* If set to true, it always auto-commits (if auto-commit is enabled).
|
||||
* If not set (the default), it effectively has the same value as enableDlq,
|
||||
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
|
||||
* Flag to enable auto commit on error in polled consumers.
|
||||
*/
|
||||
@Deprecated
|
||||
private Boolean autoCommitOnError;
|
||||
|
||||
/**
|
||||
@@ -312,29 +306,19 @@ public class KafkaConsumerProperties {
|
||||
}
|
||||
|
||||
/**
|
||||
* @return is autocommit on error
|
||||
* @return is autocommit on error in polled consumers.
|
||||
*
|
||||
* Effective only if autoCommitOffset is set to true.
|
||||
* If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages.
|
||||
* It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures.
|
||||
* If set to true, it always auto-commits (if auto-commit is enabled).
|
||||
* If not set (the default), it effectively has the same value as enableDlq,
|
||||
* auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.
|
||||
*
|
||||
* @deprecated in favor of using an error handler and customize the container with that error handler.
|
||||
* This property accessor is only used in polled consumers.
|
||||
*/
|
||||
@Deprecated
|
||||
public Boolean getAutoCommitOnError() {
|
||||
return this.autoCommitOnError;
|
||||
}
|
||||
|
||||
/**
|
||||
*
|
||||
* @param autoCommitOnError commit on error
|
||||
* @param autoCommitOnError commit on error in polled consumers.
|
||||
*
|
||||
* @deprecated in favor of using an error handler and customize the container with that error handler.
|
||||
*/
|
||||
@Deprecated
|
||||
public void setAutoCommitOnError(Boolean autoCommitOnError) {
|
||||
this.autoCommitOnError = autoCommitOnError;
|
||||
}
|
||||
|
||||
@@ -0,0 +1,31 @@
|
||||
/*
|
||||
* Copyright 2021-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* https://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.provisioning;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
/**
|
||||
* Customizer for configuring AdminClient.
|
||||
*
|
||||
* @author Soby Chacko
|
||||
* @since 3.1.2
|
||||
*/
|
||||
@FunctionalInterface
|
||||
public interface AdminClientConfigCustomizer {
|
||||
|
||||
void configure(Map<String, Object> adminClientProperties);
|
||||
}
|
||||
@@ -105,16 +105,23 @@ public class KafkaTopicProvisioner implements
|
||||
* Create an instance.
|
||||
* @param kafkaBinderConfigurationProperties the binder configuration properties.
|
||||
* @param kafkaProperties the boot Kafka properties used to build the
|
||||
* @param adminClientConfigCustomizer to customize {@link AdminClient}.
|
||||
* {@link AdminClient}.
|
||||
*/
|
||||
public KafkaTopicProvisioner(
|
||||
KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
|
||||
KafkaProperties kafkaProperties) {
|
||||
KafkaProperties kafkaProperties,
|
||||
AdminClientConfigCustomizer adminClientConfigCustomizer) {
|
||||
Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
|
||||
this.adminClientProperties = kafkaProperties.buildAdminProperties();
|
||||
this.configurationProperties = kafkaBinderConfigurationProperties;
|
||||
this.adminClientProperties = kafkaProperties.buildAdminProperties();
|
||||
normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties,
|
||||
kafkaBinderConfigurationProperties);
|
||||
// If the application provides an AdminConfig customizer
|
||||
// and overrides properties, that takes precedence.
|
||||
if (adminClientConfigCustomizer != null) {
|
||||
adminClientConfigCustomizer.configure(this.adminClientProperties);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -151,7 +158,7 @@ public class KafkaTopicProvisioner implements
|
||||
logger.info("Using kafka topic for outbound: " + name);
|
||||
}
|
||||
KafkaTopicUtils.validateTopicName(name);
|
||||
try (AdminClient adminClient = AdminClient.create(this.adminClientProperties)) {
|
||||
try (AdminClient adminClient = createAdminClient()) {
|
||||
createTopic(adminClient, name, properties.getPartitionCount(), false,
|
||||
properties.getExtension().getTopic());
|
||||
int partitions = 0;
|
||||
@@ -303,9 +310,10 @@ public class KafkaTopicProvisioner implements
|
||||
? partitions
|
||||
: properties.getExtension().getDlqPartitions();
|
||||
try {
|
||||
final KafkaProducerProperties dlqProducerProperties = properties.getExtension().getDlqProducerProperties();
|
||||
createTopicAndPartitions(adminClient, dlqTopic, dlqPartitions,
|
||||
properties.getExtension().isAutoRebalanceEnabled(),
|
||||
properties.getExtension().getTopic());
|
||||
dlqProducerProperties.getTopic());
|
||||
}
|
||||
catch (Throwable throwable) {
|
||||
if (throwable instanceof Error) {
|
||||
|
||||
@@ -42,6 +42,8 @@ import static org.assertj.core.api.Assertions.fail;
|
||||
*/
|
||||
public class KafkaTopicProvisionerTests {
|
||||
|
||||
AdminClientConfigCustomizer adminClientConfigCustomizer = adminClientProperties -> adminClientProperties.put("foo", "bar");
|
||||
|
||||
@SuppressWarnings("rawtypes")
|
||||
@Test
|
||||
public void bootPropertiesOverriddenExceptServers() throws Exception {
|
||||
@@ -58,7 +60,7 @@ public class KafkaTopicProvisionerTests {
|
||||
ts.getFile().getAbsolutePath());
|
||||
binderConfig.setBrokers("localhost:9092");
|
||||
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
|
||||
bootConfig);
|
||||
bootConfig, adminClientConfigCustomizer);
|
||||
AdminClient adminClient = provisioner.createAdminClient();
|
||||
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
|
||||
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
|
||||
@@ -67,6 +69,7 @@ public class KafkaTopicProvisionerTests {
|
||||
assertThat(
|
||||
((List) configs.get(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)).get(0))
|
||||
.isEqualTo("localhost:1234");
|
||||
assertThat(configs.get("foo")).isEqualTo("bar");
|
||||
adminClient.close();
|
||||
}
|
||||
|
||||
@@ -86,7 +89,7 @@ public class KafkaTopicProvisionerTests {
|
||||
ts.getFile().getAbsolutePath());
|
||||
binderConfig.setBrokers("localhost:1234");
|
||||
KafkaTopicProvisioner provisioner = new KafkaTopicProvisioner(binderConfig,
|
||||
bootConfig);
|
||||
bootConfig, adminClientConfigCustomizer);
|
||||
AdminClient adminClient = provisioner.createAdminClient();
|
||||
assertThat(KafkaTestUtils.getPropertyValue(adminClient,
|
||||
"client.selector.channelBuilder")).isInstanceOf(SslChannelBuilder.class);
|
||||
@@ -106,7 +109,7 @@ public class KafkaTopicProvisionerTests {
|
||||
binderConfig.getConfiguration().put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
|
||||
"localhost:1234");
|
||||
try {
|
||||
new KafkaTopicProvisioner(binderConfig, bootConfig);
|
||||
new KafkaTopicProvisioner(binderConfig, bootConfig, adminClientConfigCustomizer);
|
||||
fail("Expected illegal state");
|
||||
}
|
||||
catch (IllegalStateException e) {
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
<parent>
|
||||
<groupId>org.springframework.cloud</groupId>
|
||||
<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
|
||||
<version>3.1.0</version>
|
||||
<version>3.1.3</version>
|
||||
</parent>
|
||||
|
||||
<properties>
|
||||
|
||||
@@ -160,7 +160,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
(KTableBoundElementFactory.KTableWrapper) targetBean;
|
||||
//wrap the proxy created during the initial target type binding with real object (KTable)
|
||||
kTableWrapper.wrap((KTable<Object, Object>) table);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
|
||||
arguments[index] = table;
|
||||
}
|
||||
else if (parameterType.isAssignableFrom(GlobalKTable.class)) {
|
||||
@@ -172,7 +172,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
(GlobalKTableBoundElementFactory.GlobalKTableWrapper) targetBean;
|
||||
//wrap the proxy created during the initial target type binding with real object (KTable)
|
||||
globalKTableWrapper.wrap((GlobalKTable<Object, Object>) table);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);
|
||||
this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);
|
||||
arguments[index] = table;
|
||||
}
|
||||
}
|
||||
@@ -314,6 +314,11 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
streamConfiguration.put(RecoveringDeserializationExceptionHandler.KSTREAM_DESERIALIZATION_RECOVERER,
|
||||
applicationContext.getBean(SendToDlqAndContinue.class));
|
||||
}
|
||||
else if (deserializationExceptionHandler == DeserializationExceptionHandler.skipAndContinue) {
|
||||
streamConfiguration.put(
|
||||
StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
|
||||
SkipAndContinueExceptionHandler.class);
|
||||
}
|
||||
|
||||
KafkaStreamsConfiguration kafkaStreamsConfiguration = new KafkaStreamsConfiguration(streamConfiguration);
|
||||
|
||||
@@ -420,17 +425,19 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
}
|
||||
|
||||
KStream<?, ?> stream;
|
||||
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
|
||||
new Serdes.BytesSerde() : valueSerde;
|
||||
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
|
||||
|
||||
if (this.kafkaStreamsExtendedBindingProperties
|
||||
.getExtendedConsumerProperties(inboundName).isDestinationIsPattern()) {
|
||||
final Pattern pattern = Pattern.compile(this.bindingServiceProperties.getBindingDestination(inboundName));
|
||||
stream = streamsBuilder.stream(pattern);
|
||||
|
||||
stream = streamsBuilder.stream(pattern, consumed);
|
||||
}
|
||||
else {
|
||||
String[] bindingTargets = StringUtils.commaDelimitedListToStringArray(
|
||||
this.bindingServiceProperties.getBindingDestination(inboundName));
|
||||
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
|
||||
new Serdes.BytesSerde() : valueSerde;
|
||||
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
|
||||
stream = streamsBuilder.stream(Arrays.asList(bindingTargets),
|
||||
consumed);
|
||||
}
|
||||
@@ -439,36 +446,7 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
|
||||
AtomicBoolean matched = new AtomicBoolean();
|
||||
// Processor to retrieve the header value.
|
||||
stream.process(() -> new Processor() {
|
||||
|
||||
ProcessorContext context;
|
||||
|
||||
@Override
|
||||
public void init(ProcessorContext context) {
|
||||
this.context = context;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void process(Object key, Object value) {
|
||||
final Headers headers = this.context.headers();
|
||||
final Iterable<Header> eventTypeHeader = headers.headers(kafkaStreamsConsumerProperties.getEventTypeHeaderKey());
|
||||
if (eventTypeHeader != null && eventTypeHeader.iterator().hasNext()) {
|
||||
String eventTypeFromHeader = new String(eventTypeHeader.iterator().next().value());
|
||||
final String[] eventTypesFromBinding = StringUtils.commaDelimitedListToStringArray(kafkaStreamsConsumerProperties.getEventTypes());
|
||||
for (String eventTypeFromBinding : eventTypesFromBinding) {
|
||||
if (eventTypeFromHeader.equals(eventTypeFromBinding)) {
|
||||
matched.set(true);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() {
|
||||
|
||||
}
|
||||
});
|
||||
stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched));
|
||||
// Branching based on event type match.
|
||||
final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
|
||||
// Deserialize if we have a branch from above.
|
||||
@@ -554,12 +532,31 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
StreamsBuilder streamsBuilder, Serde<?> keySerde,
|
||||
Serde<?> valueSerde, String materializedAs, String bindingDestination,
|
||||
Topology.AutoOffsetReset autoOffsetReset) {
|
||||
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerde, autoOffsetReset);
|
||||
return materializedAs != null
|
||||
|
||||
final Serde<?> valueSerdeToUse = StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes()) ?
|
||||
new Serdes.BytesSerde() : valueSerde;
|
||||
|
||||
final Consumed<?, ?> consumed = getConsumed(kafkaStreamsConsumerProperties, keySerde, valueSerdeToUse, autoOffsetReset);
|
||||
|
||||
final KTable<?, ?> kTable = materializedAs != null
|
||||
? materializedAs(streamsBuilder, bindingDestination, materializedAs,
|
||||
keySerde, valueSerde, autoOffsetReset, kafkaStreamsConsumerProperties)
|
||||
keySerde, valueSerdeToUse, autoOffsetReset, kafkaStreamsConsumerProperties)
|
||||
: streamsBuilder.table(bindingDestination,
|
||||
consumed);
|
||||
if (StringUtils.hasText(kafkaStreamsConsumerProperties.getEventTypes())) {
|
||||
AtomicBoolean matched = new AtomicBoolean();
|
||||
final KStream<?, ?> stream = kTable.toStream();
|
||||
|
||||
// Processor to retrieve the header value.
|
||||
stream.process(() -> eventTypeProcessor(kafkaStreamsConsumerProperties, matched));
|
||||
// Branching based on event type match.
|
||||
final KStream<?, ?>[] branch = stream.branch((key, value) -> matched.getAndSet(false));
|
||||
// Deserialize if we have a branch from above.
|
||||
final KStream<?, Object> deserializedKStream = branch[0].mapValues(value -> valueSerde.deserializer().deserialize(null, ((Bytes) value).get()));
|
||||
|
||||
return deserializedKStream.toTable();
|
||||
}
|
||||
return kTable;
|
||||
}
|
||||
|
||||
private <K, V> Consumed<K, V> getConsumed(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties,
|
||||
@@ -576,4 +573,37 @@ public abstract class AbstractKafkaStreamsBinderProcessor implements Application
|
||||
}
|
||||
return consumed;
|
||||
}
|
||||
|
||||
private <K, V> Processor<K, V> eventTypeProcessor(KafkaStreamsConsumerProperties kafkaStreamsConsumerProperties, AtomicBoolean matched) {
|
||||
return new Processor() {
|
||||
|
||||
ProcessorContext context;
|
||||
|
||||
@Override
|
||||
public void init(ProcessorContext context) {
|
||||
this.context = context;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void process(Object key, Object value) {
|
||||
final Headers headers = this.context.headers();
|
||||
final Iterable<Header> eventTypeHeader = headers.headers(kafkaStreamsConsumerProperties.getEventTypeHeaderKey());
|
||||
if (eventTypeHeader != null && eventTypeHeader.iterator().hasNext()) {
|
||||
String eventTypeFromHeader = new String(eventTypeHeader.iterator().next().value());
|
||||
final String[] eventTypesFromBinding = StringUtils.commaDelimitedListToStringArray(kafkaStreamsConsumerProperties.getEventTypes());
|
||||
for (String eventTypeFromBinding : eventTypesFromBinding) {
|
||||
if (eventTypeFromHeader.equals(eventTypeFromBinding)) {
|
||||
matched.set(true);
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void close() {
|
||||
|
||||
}
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
@@ -38,6 +38,10 @@ public enum DeserializationExceptionHandler {
|
||||
* Deserialization error handler with DLQ send.
|
||||
* See {@link org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler}
|
||||
*/
|
||||
sendToDlq
|
||||
sendToDlq,
|
||||
/**
|
||||
* Deserialization error handler that silently skips the error and continue.
|
||||
*/
|
||||
skipAndContinue;
|
||||
|
||||
}
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2018-2020 the original author or authors.
|
||||
* Copyright 2018-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -30,6 +30,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
import org.springframework.retry.support.RetryTemplate;
|
||||
import org.springframework.util.StringUtils;
|
||||
|
||||
@@ -78,12 +79,30 @@ public class GlobalKTableBinder extends
|
||||
group = properties.getExtension().getApplicationId();
|
||||
}
|
||||
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
|
||||
|
||||
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
|
||||
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
|
||||
|
||||
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
|
||||
getApplicationContext(), this.kafkaTopicProvisioner,
|
||||
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
|
||||
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
|
||||
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
|
||||
this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
|
||||
return new DefaultBinding<>(name, group, inputTarget, null);
|
||||
return new DefaultBinding<GlobalKTable<Object, Object>>(bindingName, group, inputTarget, streamsBuilderFactoryBean) {
|
||||
|
||||
@Override
|
||||
public boolean isInput() {
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized void stop() {
|
||||
super.stop();
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
@Override
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2018-2020 the original author or authors.
|
||||
* Copyright 2018-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -18,11 +18,13 @@ package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Map;
|
||||
|
||||
import org.springframework.beans.factory.ObjectProvider;
|
||||
import org.springframework.beans.factory.annotation.Qualifier;
|
||||
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
|
||||
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
|
||||
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
|
||||
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
@@ -39,15 +41,15 @@ import org.springframework.context.annotation.Import;
|
||||
*/
|
||||
@Configuration
|
||||
@Import({ KafkaAutoConfiguration.class,
|
||||
KafkaStreamsBinderHealthIndicatorConfiguration.class,
|
||||
MultiBinderPropertiesConfiguration.class})
|
||||
MultiBinderPropertiesConfiguration.class,
|
||||
KafkaStreamsBinderHealthIndicatorConfiguration.class})
|
||||
public class GlobalKTableBinderConfiguration {
|
||||
|
||||
@Bean
|
||||
public KafkaTopicProvisioner provisioningProvider(
|
||||
KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaProperties kafkaProperties) {
|
||||
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
|
||||
KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
|
||||
return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties, adminClientConfigCustomizer.getIfUnique());
|
||||
}
|
||||
|
||||
@Bean
|
||||
@@ -81,6 +83,9 @@ public class GlobalKTableBinderConfiguration {
|
||||
beanFactory.registerSingleton(
|
||||
KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
|
||||
outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
|
||||
beanFactory.registerSingleton(
|
||||
KafkaStreamsRegistry.class.getSimpleName(),
|
||||
outerContext.getBean(KafkaStreamsRegistry.class));
|
||||
};
|
||||
}
|
||||
|
||||
|
||||
@@ -109,7 +109,7 @@ public class InteractiveQueryService {
|
||||
if (store != null) {
|
||||
return store;
|
||||
}
|
||||
throw new IllegalStateException("Error when retrieving state store: j " + storeName, throwable);
|
||||
throw new IllegalStateException("Error when retrieving state store: " + storeName, throwable);
|
||||
});
|
||||
}
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2017-2020 the original author or authors.
|
||||
* Copyright 2017-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -16,6 +16,8 @@
|
||||
|
||||
package org.springframework.cloud.stream.binder.kafka.streams;
|
||||
|
||||
import java.util.Properties;
|
||||
|
||||
import org.apache.commons.logging.Log;
|
||||
import org.apache.commons.logging.LogFactory;
|
||||
import org.apache.kafka.common.serialization.Serde;
|
||||
@@ -38,6 +40,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
|
||||
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
|
||||
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
|
||||
import org.springframework.retry.support.RetryTemplate;
|
||||
import org.springframework.util.StringUtils;
|
||||
|
||||
@@ -76,10 +79,10 @@ class KStreamBinder extends
|
||||
private final KeyValueSerdeResolver keyValueSerdeResolver;
|
||||
|
||||
KStreamBinder(KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KeyValueSerdeResolver keyValueSerdeResolver) {
|
||||
KafkaTopicProvisioner kafkaTopicProvisioner,
|
||||
KafkaStreamsMessageConversionDelegate kafkaStreamsMessageConversionDelegate,
|
||||
KafkaStreamsBindingInformationCatalogue KafkaStreamsBindingInformationCatalogue,
|
||||
KeyValueSerdeResolver keyValueSerdeResolver) {
|
||||
this.binderConfigurationProperties = binderConfigurationProperties;
|
||||
this.kafkaTopicProvisioner = kafkaTopicProvisioner;
|
||||
this.kafkaStreamsMessageConversionDelegate = kafkaStreamsMessageConversionDelegate;
|
||||
@@ -103,11 +106,31 @@ class KStreamBinder extends
|
||||
|
||||
final RetryTemplate retryTemplate = buildRetryTemplate(properties);
|
||||
|
||||
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
|
||||
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
|
||||
|
||||
KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
|
||||
getApplicationContext(), this.kafkaTopicProvisioner,
|
||||
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(), this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
|
||||
this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
|
||||
this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
|
||||
this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
|
||||
return new DefaultBinding<>(name, group, inputTarget, null);
|
||||
|
||||
return new DefaultBinding<KStream<Object, Object>>(bindingName, group,
|
||||
inputTarget, streamsBuilderFactoryBean) {
|
||||
|
||||
@Override
|
||||
public boolean isInput() {
|
||||
return true;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized void stop() {
|
||||
super.stop();
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -136,7 +159,31 @@ class KStreamBinder extends
|
||||
|
||||
to(properties.isUseNativeEncoding(), name, outboundBindTarget,
|
||||
(Serde<Object>) keySerde, (Serde<Object>) valueSerde, properties.getExtension());
|
||||
return new DefaultBinding<>(name, null, outboundBindTarget, null);
|
||||
|
||||
final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(outboundBindTarget);
|
||||
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
|
||||
.getStreamsBuilderFactoryBeanPerBinding().get(bindingName);
|
||||
|
||||
// We need the application id to pass to DefaultBinding so that it won't be interpreted as an anonymous group.
|
||||
// In case, if we can't find application.id (which is unlikely), we just default to bindingName.
|
||||
// This will only be used for lifecycle management through actuator endpoints.
|
||||
final Properties streamsConfiguration = streamsBuilderFactoryBean.getStreamsConfiguration();
|
||||
final String applicationId = streamsConfiguration != null ? (String) streamsConfiguration.get("application.id") : bindingName;
|
||||
|
||||
return new DefaultBinding<KStream<Object, Object>>(bindingName,
|
||||
applicationId, outboundBindTarget, streamsBuilderFactoryBean) {
|
||||
|
||||
@Override
|
||||
public boolean isInput() {
|
||||
return false;
|
||||
}
|
||||
|
||||
@Override
|
||||
public synchronized void stop() {
|
||||
super.stop();
|
||||
KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
@SuppressWarnings("unchecked")
|
||||
|
||||
@@ -1,5 +1,5 @@
/*
* Copyright 2017-2018 the original author or authors.
* Copyright 2017-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,10 +16,12 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -37,16 +39,16 @@ import org.springframework.context.annotation.Import;
*/
@Configuration
@Import({ KafkaAutoConfiguration.class,
        KafkaStreamsBinderHealthIndicatorConfiguration.class,
        MultiBinderPropertiesConfiguration.class})
        MultiBinderPropertiesConfiguration.class,
        KafkaStreamsBinderHealthIndicatorConfiguration.class})
public class KStreamBinderConfiguration {

    @Bean
    public KafkaTopicProvisioner provisioningProvider(
            KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
            KafkaProperties kafkaProperties) {
            KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
        return new KafkaTopicProvisioner(kafkaStreamsBinderConfigurationProperties,
                kafkaProperties);
                kafkaProperties, adminClientConfigCustomizer.getIfUnique());
    }

    @Bean
@@ -86,6 +88,9 @@ public class KStreamBinderConfiguration {
    beanFactory.registerSingleton(
            KafkaStreamsExtendedBindingProperties.class.getSimpleName(),
            outerContext.getBean(KafkaStreamsExtendedBindingProperties.class));
    beanFactory.registerSingleton(
            KafkaStreamsRegistry.class.getSimpleName(),
            outerContext.getBean(KafkaStreamsRegistry.class));
};
}
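The new ObjectProvider<AdminClientConfigCustomizer> parameter lets an application tweak the AdminClient the provisioner uses for topic creation. A minimal sketch of supplying one, assuming AdminClientConfigCustomizer exposes a single configure(Map) callback (the configuration class name is hypothetical):

import java.util.Map;

import org.apache.kafka.clients.admin.AdminClientConfig;

import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AdminClientCustomization {

    // Because the binder looks the customizer up with getIfUnique(), exactly one
    // such bean should exist; it is applied to the topic-provisioning AdminClient.
    @Bean
    public AdminClientConfigCustomizer adminClientConfigCustomizer() {
        return (Map<String, Object> props) ->
                props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "10000");
    }
}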
@@ -1,5 +1,5 @@
/*
* Copyright 2018-2020 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -30,6 +30,7 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsConsumerProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsProducerProperties;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.StringUtils;

@@ -81,11 +82,30 @@ class KTableBinder extends
}

final RetryTemplate retryTemplate = buildRetryTemplate(properties);

final String bindingName = this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget);
final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
        .getStreamsBuilderFactoryBeanPerBinding().get(bindingName);

KafkaStreamsBinderUtils.prepareConsumerBinding(name, group,
        getApplicationContext(), this.kafkaTopicProvisioner,
        this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(), this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget));
        this.binderConfigurationProperties, properties, retryTemplate, getBeanFactory(),
        this.kafkaStreamsBindingInformationCatalogue.bindingNamePerTarget(inputTarget),
        this.kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);

return new DefaultBinding<>(name, group, inputTarget, null);
return new DefaultBinding<KTable<Object, Object>>(bindingName, group, inputTarget, streamsBuilderFactoryBean) {

    @Override
    public boolean isInput() {
        return true;
    }

    @Override
    public synchronized void stop() {
        super.stop();
        KafkaStreamsBinderUtils.closeDlqProducerFactories(kafkaStreamsBindingInformationCatalogue, streamsBuilderFactoryBean);
    }
};
}

@Override
@@ -1,5 +1,5 @@
/*
* Copyright 2018-2020 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,11 +18,13 @@ package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.Map;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsExtendedBindingProperties;
@@ -39,15 +41,15 @@ import org.springframework.context.annotation.Import;
@SuppressWarnings("ALL")
@Configuration
@Import({ KafkaAutoConfiguration.class,
        KafkaStreamsBinderHealthIndicatorConfiguration.class,
        MultiBinderPropertiesConfiguration.class})
        MultiBinderPropertiesConfiguration.class,
        KafkaStreamsBinderHealthIndicatorConfiguration.class})
public class KTableBinderConfiguration {

    @Bean
    public KafkaTopicProvisioner provisioningProvider(
            KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
            KafkaProperties kafkaProperties) {
        return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties);
            KafkaProperties kafkaProperties, ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
        return new KafkaTopicProvisioner(binderConfigurationProperties, kafkaProperties, adminClientConfigCustomizer.getIfUnique());
    }

    @Bean
@@ -79,6 +81,9 @@ public class KTableBinderConfiguration {
    beanFactory.registerSingleton(
            KafkaStreamsBindingInformationCatalogue.class.getSimpleName(),
            outerContext.getBean(KafkaStreamsBindingInformationCatalogue.class));
    beanFactory.registerSingleton(
            KafkaStreamsRegistry.class.getSimpleName(),
            outerContext.getBean(KafkaStreamsRegistry.class));
};
}
@@ -1,5 +1,5 @@
/*
* Copyright 2019-2020 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.lang.reflect.Method;
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
@@ -26,8 +27,6 @@ import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.streams.KafkaStreams;
@@ -52,17 +51,38 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
*/
public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator implements DisposableBean {

private final Log logger = LogFactory.getLog(getClass());
/**
* Static initialization for detecting whether the application is using Kafka client 2.5 vs lower versions.
*/
private static ClassLoader CLASS_LOADER = KafkaStreamsBinderHealthIndicator.class.getClassLoader();
private static boolean isKafkaStreams25 = true;
private static Method methodForIsRunning;

static {
    try {
        Class<?> KAFKA_STREAMS_STATE_CLASS = CLASS_LOADER.loadClass("org.apache.kafka.streams.KafkaStreams$State");

        Method[] declaredMethods = KAFKA_STREAMS_STATE_CLASS.getDeclaredMethods();
        for (Method m : declaredMethods) {
            if (m.getName().equals("isRunning")) {
                isKafkaStreams25 = false;
                methodForIsRunning = m;
            }
        }
    }
    catch (ClassNotFoundException e) {
        throw new IllegalStateException("KafkaStreams$State class not found", e);
    }
}

private final KafkaStreamsRegistry kafkaStreamsRegistry;

private final KafkaStreamsBinderConfigurationProperties configurationProperties;

private final Map<String, Object> adminClientProperties;

private final KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue;

private static final ThreadLocal<Status> healthStatusThreadLocal = new ThreadLocal<>();

private AdminClient adminClient;

private final Lock lock = new ReentrantLock();
@@ -88,50 +108,54 @@ public class KafkaStreamsBinderHealthIndicator extends AbstractHealthIndicator i
if (this.adminClient == null) {
    this.adminClient = AdminClient.create(this.adminClientProperties);
}
final Status status = healthStatusThreadLocal.get();
// If one of the Kafka Streams binders (kstream, ktable, globalktable) was already reported DOWN
// earlier in the same request, retrieve that status from the thread-local storage where it was
// saved. This reduces the total duration of the health check: each Kafka Streams binder runs its
// own check, and since we already know the broker is DOWN, we simply pass that information along.
if (status == Status.DOWN) {
    builder.withDetail("No topic information available", "Kafka broker is not reachable");
    builder.status(Status.DOWN);

final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);

if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
    builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
    builder.status(Status.UNKNOWN);
}
else {
    final ListTopicsResult listTopicsResult = this.adminClient.listTopics();
    listTopicsResult.listings().get(this.configurationProperties.getHealthTimeout(), TimeUnit.SECONDS);

    if (this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeans().isEmpty()) {
        builder.withDetail("No Kafka Streams bindings have been established", "Kafka Streams binder did not detect any processors");
        builder.status(Status.UNKNOWN);
    }
    else {
        boolean up = true;
        for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
        boolean up = true;
        for (KafkaStreams kStream : kafkaStreamsRegistry.getKafkaStreams()) {
            if (isKafkaStreams25) {
                up &= kStream.state().isRunningOrRebalancing();
                builder.withDetails(buildDetails(kStream));
            }
            builder.status(up ? Status.UP : Status.DOWN);
            else {
                // if Kafka client version is lower than 2.5, then call the method reflectively.
                final boolean isRunningInvokedResult = (boolean) methodForIsRunning.invoke(kStream.state());
                up &= isRunningInvokedResult;
            }
            builder.withDetails(buildDetails(kStream));
        }
        builder.status(up ? Status.UP : Status.DOWN);
    }
}
catch (Exception e) {
    builder.withDetail("No topic information available", "Kafka broker is not reachable");
    builder.status(Status.DOWN);
    builder.withException(e);
    // Store the binder DOWN status into thread-local storage.
    healthStatusThreadLocal.set(Status.DOWN);
}
finally {
    this.lock.unlock();
}
}

private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) {
private Map<String, Object> buildDetails(KafkaStreams kafkaStreams) throws Exception {
final Map<String, Object> details = new HashMap<>();
final Map<String, Object> perAppdIdDetails = new HashMap<>();

if (kafkaStreams.state().isRunningOrRebalancing()) {
boolean isRunningResult;
if (isKafkaStreams25) {
    isRunningResult = kafkaStreams.state().isRunningOrRebalancing();
}
else {
    // if Kafka client version is lower than 2.5, then call the method reflectively.
    isRunningResult = (boolean) methodForIsRunning.invoke(kafkaStreams.state());
}

if (isRunningResult) {
    for (ThreadMetadata metadata : kafkaStreams.localThreadsMetadata()) {
        perAppdIdDetails.put("threadName", metadata.threadName());
        perAppdIdDetails.put("threadState", metadata.threadState());
@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,9 +16,9 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.actuate.autoconfigure.health.ConditionalOnEnabledHealthIndicator;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsBinderConfigurationProperties;
@@ -33,16 +33,18 @@ import org.springframework.context.annotation.Configuration;
@Configuration
@ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
@ConditionalOnEnabledHealthIndicator("binders")
class KafkaStreamsBinderHealthIndicatorConfiguration {
public class KafkaStreamsBinderHealthIndicatorConfiguration {

    @Bean
    @ConditionalOnBean(KafkaStreamsRegistry.class)
    KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
            KafkaStreamsRegistry kafkaStreamsRegistry, @Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
    public KafkaStreamsBinderHealthIndicator kafkaStreamsBinderHealthIndicator(
            ObjectProvider<KafkaStreamsRegistry> kafkaStreamsRegistry,
            @Qualifier("binderConfigurationProperties")KafkaStreamsBinderConfigurationProperties kafkaStreamsBinderConfigurationProperties,
            KafkaProperties kafkaProperties, KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue) {

        return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry, kafkaStreamsBinderConfigurationProperties,
                kafkaProperties, kafkaStreamsBindingInformationCatalogue);
        if (kafkaStreamsRegistry.getIfUnique() != null) {
            return new KafkaStreamsBinderHealthIndicator(kafkaStreamsRegistry.getIfUnique(), kafkaStreamsBinderConfigurationProperties,
                    kafkaProperties, kafkaStreamsBindingInformationCatalogue);
        }
        return null;
    }

}
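Because of @ConditionalOnEnabledHealthIndicator("binders") above, the indicator participates in the standard Spring Boot health-toggle convention. A minimal sketch of the relevant application.properties entries (the property names follow the Boot convention implied by the annotation, not this diff):

management.health.binders.enabled=true
management.endpoint.health.show-details=always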
@@ -92,11 +92,13 @@ public class KafkaStreamsBinderMetrics {
this.meterBinder = registry -> {
    if (streamsBuilderFactoryBeans != null) {
        for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
            KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
            final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();
            if (streamsBuilderFactoryBean.isRunning()) {
                KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
                final Map<MetricName, ? extends Metric> metrics = kafkaStreams.metrics();

                prepareToBindMetrics(registry, metrics);
                checkAndBindMetrics(registry, metrics);
                prepareToBindMetrics(registry, metrics);
                checkAndBindMetrics(registry, metrics);
            }
        }
    }
};
@@ -17,7 +17,6 @@
package org.springframework.cloud.stream.binder.kafka.streams;

import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
@@ -25,12 +24,8 @@ import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

import io.micrometer.core.instrument.ImmutableTag;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.binder.kafka.KafkaStreamsMetrics;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.errors.LogAndFailExceptionHandler;
@@ -76,6 +71,7 @@ import org.springframework.integration.support.utils.IntegrationUtils;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;
import org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler;
import org.springframework.lang.Nullable;
import org.springframework.messaging.converter.CompositeMessageConverter;
@@ -402,7 +398,7 @@ public class KafkaStreamsBinderSupportAutoConfiguration {
        KafkaStreamsBindingInformationCatalogue catalogue,
        KafkaStreamsRegistry kafkaStreamsRegistry,
        @Nullable KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
        @Nullable StreamsListener listener) {
        @Nullable KafkaStreamsMicrometerListener listener) {
    return new StreamsBuilderFactoryManager(catalogue, kafkaStreamsRegistry, kafkaStreamsBinderMetrics, listener);
}

@@ -448,33 +444,10 @@ public class KafkaStreamsBinderSupportAutoConfiguration {

@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public StreamsListener binderStreamsListener(MeterRegistry meterRegistry) {
    return new StreamsListener() {

        private final Map<String, KafkaStreamsMetrics> metrics = new HashMap<>();

        @Override
        public synchronized void streamsAdded(String id, KafkaStreams kafkaStreams) {
            if (!this.metrics.containsKey(id)) {
                List<Tag> streamsTags = new ArrayList<>();
                streamsTags.add(new ImmutableTag("spring.id", id));
                this.metrics.put(id, new KafkaStreamsMetrics(kafkaStreams, streamsTags));
                this.metrics.get(id).bindTo(meterRegistry);
            }
        }

        @Override
        public synchronized void streamsRemoved(String id, KafkaStreams streams) {
            KafkaStreamsMetrics removed = this.metrics.remove(id);
            if (removed != null) {
                removed.close();
            }
        }

    };
public KafkaStreamsMicrometerListener binderStreamsListener(MeterRegistry meterRegistry) {
    return new KafkaStreamsMicrometerListener(meterRegistry);
}
}

}

@Configuration
@@ -498,34 +471,9 @@ public class KafkaStreamsBinderSupportAutoConfiguration {

@Bean
@ConditionalOnMissingBean(name = "binderStreamsListener")
public StreamsListener binderStreamsListener(ConfigurableApplicationContext context) {
    MeterRegistry meterRegistry = context.getBean("outerContext", ApplicationContext.class)
            .getBean(MeterRegistry.class);
    return new StreamsListener() {

        private final Map<String, KafkaStreamsMetrics> metrics = new HashMap<>();

        @Override
        public synchronized void streamsAdded(String id, KafkaStreams kafkaStreams) {
            if (!this.metrics.containsKey(id)) {
                List<Tag> streamsTags = new ArrayList<>();
                streamsTags.add(new ImmutableTag("spring.id", id));
                this.metrics.put(id, new KafkaStreamsMetrics(kafkaStreams, streamsTags));
                this.metrics.get(id).bindTo(meterRegistry);
            }
        }

        @Override
        public synchronized void streamsRemoved(String id, KafkaStreams streams) {
            KafkaStreamsMetrics removed = this.metrics.remove(id);
            if (removed != null) {
                removed.close();
            }
        }

    };
public KafkaStreamsMicrometerListener binderStreamsListener(MeterRegistry meterRegistry) {
    return new KafkaStreamsMicrometerListener(meterRegistry);
}
}

}
}
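Both hand-rolled StreamsListener implementations above collapse into spring-kafka's KafkaStreamsMicrometerListener, which performs the same per-instance KafkaStreamsMetrics registration. An application can still override the bean by name thanks to @ConditionalOnMissingBean; a minimal sketch, assuming a KafkaStreamsMicrometerListener constructor that also accepts extra tags (verify the constructor against your spring-kafka version; the configuration class and tag values are hypothetical):

import java.util.Collections;

import io.micrometer.core.instrument.ImmutableTag;
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;

@Configuration
public class CustomStreamsMetricsConfig {

    // Overrides the binder's default listener; every KafkaStreams instance the
    // binder starts is then registered with Micrometer under the extra tag.
    @Bean(name = "binderStreamsListener")
    public KafkaStreamsMicrometerListener binderStreamsListener(MeterRegistry meterRegistry) {
        return new KafkaStreamsMicrometerListener(meterRegistry,
                Collections.singletonList(new ImmutableTag("team", "payments")));
    }
}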
@@ -1,5 +1,5 @@
/*
* Copyright 2018-2020 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -17,6 +17,7 @@
package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

@@ -28,6 +29,7 @@ import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.kstream.KStream;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
@@ -44,6 +46,7 @@ import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolve
import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
import org.springframework.context.ApplicationContext;
import org.springframework.core.MethodParameter;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.core.KafkaTemplate;
@@ -51,6 +54,7 @@ import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.util.Assert;
import org.springframework.util.CollectionUtils;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;

@@ -73,7 +77,9 @@ final class KafkaStreamsBinderUtils {
        KafkaStreamsBinderConfigurationProperties binderConfigurationProperties,
        ExtendedConsumerProperties<KafkaStreamsConsumerProperties> properties,
        RetryTemplate retryTemplate,
        ConfigurableListableBeanFactory beanFactory, String bindingName) {
        ConfigurableListableBeanFactory beanFactory, String bindingName,
        KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
        StreamsBuilderFactoryBean streamsBuilderFactoryBean) {

    ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties =
            (ExtendedConsumerProperties) properties;
@@ -109,6 +115,8 @@ final class KafkaStreamsBinderUtils {
            new ExtendedProducerProperties<>(
                    extendedConsumerProperties.getExtension().getDlqProducerProperties()),
            binderConfigurationProperties);
    kafkaStreamsBindingInformationCatalogue.addDlqProducerFactory(streamsBuilderFactoryBean, producerFactory);

    KafkaOperations<byte[], byte[]> kafkaTemplate = new KafkaTemplate<>(producerFactory);

    Map<String, DlqDestinationResolver> dlqDestinationResolvers =
@@ -198,4 +206,23 @@ final class KafkaStreamsBinderUtils {
    return KStream.class.isAssignableFrom(targetBeanClass)
            && KStream.class.isAssignableFrom(methodParameter.getParameterType());
}

static void closeDlqProducerFactories(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
        StreamsBuilderFactoryBean streamsBuilderFactoryBean) {

    final List<ProducerFactory<byte[], byte[]>> dlqProducerFactories =
            kafkaStreamsBindingInformationCatalogue.getDlqProducerFactory(streamsBuilderFactoryBean);

    if (!CollectionUtils.isEmpty(dlqProducerFactories)) {
        for (ProducerFactory<byte[], byte[]> producerFactory : dlqProducerFactories) {
            try {
                ((DisposableBean) producerFactory).destroy();
            }
            catch (Exception exception) {
                throw new IllegalStateException(exception);
            }
        }
    }
}

}
@@ -1,5 +1,5 @@
/*
* Copyright 2018-2020 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,11 +16,14 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.streams.StreamsConfig;
@@ -31,6 +34,8 @@ import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStr
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.core.ResolvableType;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.util.CollectionUtils;

/**
* A catalogue that provides binding information for Kafka Streams target types such as
@@ -41,13 +46,13 @@ import org.springframework.kafka.config.StreamsBuilderFactoryBean;
*
* @author Soby Chacko
*/
class KafkaStreamsBindingInformationCatalogue {
public class KafkaStreamsBindingInformationCatalogue {

    private final Map<KStream<?, ?>, BindingProperties> bindingProperties = new ConcurrentHashMap<>();

    private final Map<KStream<?, ?>, KafkaStreamsConsumerProperties> consumerProperties = new ConcurrentHashMap<>();

    private final Set<StreamsBuilderFactoryBean> streamsBuilderFactoryBeans = new HashSet<>();
    private final Map<String, StreamsBuilderFactoryBean> streamsBuilderFactoryBeanPerBinding = new HashMap<>();

    private final Map<Object, ResolvableType> outboundKStreamResolvables = new HashMap<>();

@@ -55,6 +60,8 @@ class KafkaStreamsBindingInformationCatalogue {

    private final Map<Object, String> bindingNamesPerTarget = new HashMap<>();

    private final Map<StreamsBuilderFactoryBean, List<ProducerFactory<byte[], byte[]>>> dlqProducerFactories = new HashMap<>();

    /**
    * For a given bound {@link KStream}, retrieve its corresponding destination on the
    * broker.
@@ -122,19 +129,19 @@ class KafkaStreamsBindingInformationCatalogue {
        }
    }

    /**
    * Adds a mapping for KStream -> {@link StreamsBuilderFactoryBean}.
    * @param streamsBuilderFactoryBean provides the {@link StreamsBuilderFactoryBean}
    * mapped to the KStream
    */
    void addStreamBuilderFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
        this.streamsBuilderFactoryBeans.add(streamsBuilderFactoryBean);
    Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
        return new HashSet<>(this.streamsBuilderFactoryBeanPerBinding.values());
    }

    Set<StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeans() {
        return this.streamsBuilderFactoryBeans;
    void addStreamBuilderFactoryPerBinding(String binding, StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
        this.streamsBuilderFactoryBeanPerBinding.put(binding, streamsBuilderFactoryBean);
    }

    Map<String, StreamsBuilderFactoryBean> getStreamsBuilderFactoryBeanPerBinding() {
        return this.streamsBuilderFactoryBeanPerBinding;
    }

    void addOutboundKStreamResolvable(Object key, ResolvableType outboundResolvable) {
        this.outboundKStreamResolvables.put(key, outboundResolvable);
    }
@@ -175,4 +182,25 @@ class KafkaStreamsBindingInformationCatalogue {
    String bindingNamePerTarget(Object target) {
        return this.bindingNamesPerTarget.get(target);
    }

    public List<ProducerFactory<byte[], byte[]>> getDlqProducerFactories() {
        return this.dlqProducerFactories.values()
                .stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public List<ProducerFactory<byte[], byte[]>> getDlqProducerFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
        return this.dlqProducerFactories.get(streamsBuilderFactoryBean);
    }

    public void addDlqProducerFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean,
            ProducerFactory<byte[], byte[]> producerFactory) {
        List<ProducerFactory<byte[], byte[]>> producerFactories = this.dlqProducerFactories.get(streamsBuilderFactoryBean);
        if (CollectionUtils.isEmpty(producerFactories)) {
            producerFactories = new ArrayList<>();
            this.dlqProducerFactories.put(streamsBuilderFactoryBean, producerFactories);
        }
        producerFactories.add(producerFactory);
    }
}
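addDlqProducerFactory implements a get-or-create pattern by hand; Map#computeIfAbsent expresses the same idea more compactly. A nearly behavior-preserving sketch (not part of the diff; note computeIfAbsent treats an existing empty list as present, whereas CollectionUtils.isEmpty would replace it):

public void addDlqProducerFactory(StreamsBuilderFactoryBean streamsBuilderFactoryBean,
        ProducerFactory<byte[], byte[]> producerFactory) {
    // Creates the list on first access for this factory bean, then appends.
    this.dlqProducerFactories
            .computeIfAbsent(streamsBuilderFactoryBean, sbfb -> new ArrayList<>())
            .add(producerFactory);
}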
@@ -1,5 +1,5 @@
/*
* Copyright 2019-2019 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@

package org.springframework.cloud.stream.binder.kafka.streams;

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
@@ -107,9 +108,21 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
}

private Map<String, ResolvableType> buildTypeMap(ResolvableType resolvableType,
        KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
        KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory,
        Method method, String functionName) {
    Map<String, ResolvableType> resolvableTypeMap = new LinkedHashMap<>();
    if (resolvableType != null && resolvableType.getRawClass() != null) {
        if (method != null) { // Component functional bean.
            final ResolvableType firstMethodParameter = ResolvableType.forMethodParameter(method, 0);
            ResolvableType currentOutputGeneric = ResolvableType.forMethodReturnType(method);

            final Set<String> inputs = new LinkedHashSet<>(kafkaStreamsBindableProxyFactory.getInputs());
            final Iterator<String> iterator = inputs.iterator();
            populateResolvableTypeMap(firstMethodParameter, resolvableTypeMap, iterator, method, functionName);

            final Class<?> outputRawclass = currentOutputGeneric.getRawClass();
            traverseReturnTypeForComponentBeans(resolvableTypeMap, currentOutputGeneric, inputs, iterator, outputRawclass);
        }
        else if (resolvableType != null && resolvableType.getRawClass() != null) {
            int inputCount = 1;

            ResolvableType currentOutputGeneric;
@@ -129,7 +142,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro

            final Iterator<String> iterator = inputs.iterator();

            popuateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);
            populateResolvableTypeMap(resolvableType, resolvableTypeMap, iterator);

            ResolvableType iterableResType = resolvableType;
            int i = resolvableType.getRawClass().isAssignableFrom(BiFunction.class) ||
@@ -143,7 +156,7 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
                iterableResType = iterableResType.getGeneric(1);
                if (iterableResType.getRawClass() != null &&
                        functionOrConsumerFound(iterableResType)) {
                    popuateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
                    populateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
                }
                i++;
            }
@@ -154,12 +167,32 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
    return resolvableTypeMap;
}

private void traverseReturnTypeForComponentBeans(Map<String, ResolvableType> resolvableTypeMap, ResolvableType currentOutputGeneric,
        Set<String> inputs, Iterator<String> iterator, Class<?> outputRawclass) {
    if (outputRawclass != null && !outputRawclass.equals(Void.TYPE)) {
        ResolvableType iterableResType = currentOutputGeneric;
        int i = 1;
        // Traverse through the return signature.
        while (i < inputs.size() && iterator.hasNext()) {
            if (iterableResType.getRawClass() != null &&
                    functionOrConsumerFound(iterableResType)) {
                populateResolvableTypeMap(iterableResType, resolvableTypeMap, iterator);
            }
            iterableResType = iterableResType.getGeneric(1);
            i++;
        }
        if (iterableResType.getRawClass() != null && KStream.class.isAssignableFrom(iterableResType.getRawClass())) {
            resolvableTypeMap.put(OUTBOUND, iterableResType);
        }
    }
}

private boolean functionOrConsumerFound(ResolvableType iterableResType) {
    return iterableResType.getRawClass().equals(Function.class) ||
            iterableResType.getRawClass().equals(Consumer.class);
}

private void popuateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
private void populateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
        Iterator<String> iterator) {
    final String next = iterator.next();
    resolvableTypeMap.put(next, resolvableType.getGeneric(0));
@@ -171,6 +204,18 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
    }
}

private void populateResolvableTypeMap(ResolvableType resolvableType, Map<String, ResolvableType> resolvableTypeMap,
        Iterator<String> iterator, Method method, String functionName) {
    final String next = iterator.next();
    resolvableTypeMap.put(next, resolvableType);
    if (method != null) {
        final Object bean = beanFactory.getBean(functionName);
        if (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass())) {
            resolvableTypeMap.put(iterator.next(), ResolvableType.forMethodParameter(method, 1));
        }
    }
}
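The new Method/functionName parameters exist to support "component functional beans" — plain @Component classes that implement Function or BiFunction directly, where the binder must resolve generics from the method signature rather than from a @Bean return type. A minimal sketch of such a processor (the class and bean names are hypothetical, not part of the diff):

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.stereotype.Component;

// Registered as a component rather than returned from a @Bean method, so the
// binder resolves KStream<String, String> from apply(..) via reflection.
@Component("uppercase")
public class UppercaseProcessor implements Function<KStream<String, String>, KStream<String, String>> {

    @Override
    public KStream<String, String> apply(KStream<String, String> input) {
        return input.mapValues(value -> value.toUpperCase());
    }
}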
/**
* This method must be kept stateless. In the case of multiple function beans in an application,
* isolated {@link KafkaStreamsBindableProxyFactory} instances are passed in separately for those functions. If the
@@ -183,10 +228,11 @@
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
public void setupFunctionInvokerForKafkaStreams(ResolvableType resolvableType, String functionName,
        KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory) {
    final Map<String, ResolvableType> stringResolvableTypeMap = buildTypeMap(resolvableType, kafkaStreamsBindableProxyFactory);
    ResolvableType outboundResolvableType = stringResolvableTypeMap.remove(OUTBOUND);
    Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(stringResolvableTypeMap, functionName);
        KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory, Method method) {
    final Map<String, ResolvableType> resolvableTypes = buildTypeMap(resolvableType,
            kafkaStreamsBindableProxyFactory, method, functionName);
    ResolvableType outboundResolvableType = resolvableTypes.remove(OUTBOUND);
    Object[] adaptedInboundArguments = adaptAndRetrieveInboundArguments(resolvableTypes, functionName);
    try {
        if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(Consumer.class)) {
            Consumer<Object> consumer = (Consumer) this.beanFactory.getBean(functionName);
@@ -196,6 +242,49 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
            BiConsumer<Object, Object> biConsumer = (BiConsumer) this.beanFactory.getBean(functionName);
            biConsumer.accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
        }
        else if (method != null) { // Handling component functional beans
            final Object bean = beanFactory.getBean(functionName);
            if (Consumer.class.isAssignableFrom(bean.getClass())) {
                ((Consumer) bean).accept(adaptedInboundArguments[0]);
            }
            else if (BiConsumer.class.isAssignableFrom(bean.getClass())) {
                ((BiConsumer) bean).accept(adaptedInboundArguments[0], adaptedInboundArguments[1]);
            }
            else if (Function.class.isAssignableFrom(bean.getClass()) || BiFunction.class.isAssignableFrom(bean.getClass())) {
                Object result;
                if (BiFunction.class.isAssignableFrom(bean.getClass())) {
                    result = ((BiFunction) bean).apply(adaptedInboundArguments[0], adaptedInboundArguments[1]);
                }
                else {
                    result = ((Function) bean).apply(adaptedInboundArguments[0]);
                }
                int i = 1;
                while (result instanceof Function || result instanceof Consumer) {
                    if (result instanceof Function) {
                        result = ((Function) result).apply(adaptedInboundArguments[i]);
                    }
                    else {
                        ((Consumer) result).accept(adaptedInboundArguments[i]);
                        result = null;
                    }
                    i++;
                }
                if (result != null) {
                    final Set<String> outputs = new TreeSet<>(kafkaStreamsBindableProxyFactory.getOutputs());
                    final Iterator<String> outboundDefinitionIterator = outputs.iterator();
                    if (result.getClass().isArray()) {
                        final String initialInput = resolvableTypes.keySet().iterator().next();
                        final StreamsBuilderFactoryBean streamsBuilderFactoryBean =
                                this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeanPerBinding().get(initialInput);
                        handleKStreamArrayOutbound(resolvableType, functionName, kafkaStreamsBindableProxyFactory,
                                outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
                    }
                    else {
                        handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType, (KStream) result, outboundDefinitionIterator);
                    }
                }
            }
        }
        else {
            Object result;
            if (resolvableType.getRawClass() != null && resolvableType.getRawClass().equals(BiFunction.class)) {
@@ -222,45 +311,15 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
            final Iterator<String> outboundDefinitionIterator = outputs.iterator();

            if (result.getClass().isArray()) {
                // Bind the targets now: the output bindings were deferred in the KafkaStreamsBindableProxyFactory
                // because it did not know the returned array size. At this point in the execution,
                // we know exactly the number of outbound components (from the array length), so do the binding.
                final int length = ((Object[]) result).length;

                List<String> outputBindings = getOutputBindings(functionName, length);
                Iterator<String> iterator = outputBindings.iterator();
                BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;
                Object[] outboundKStreams = (Object[]) result;

                for (int ij = 0; ij < length; ij++) {

                    String next = iterator.next();
                    kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
                    RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
                    rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
                    registry.registerBeanDefinition(next, rootBeanDefinition1);

                    Object targetBean = this.applicationContext.getBean(next);

                    KStreamBoundElementFactory.KStreamWrapper
                            boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
                    boundElement.wrap((KStream) outboundKStreams[ij]);

                    kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
                            targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
                }
                final String initialInput = resolvableTypes.keySet().iterator().next();
                final StreamsBuilderFactoryBean streamsBuilderFactoryBean =
                        this.kafkaStreamsBindingInformationCatalogue.getStreamsBuilderFactoryBeanPerBinding().get(initialInput);
                handleKStreamArrayOutbound(resolvableType, functionName, kafkaStreamsBindableProxyFactory,
                        outboundResolvableType, (Object[]) result, streamsBuilderFactoryBean);
            }
            else {
                if (outboundDefinitionIterator.hasNext()) {
                    final String next = outboundDefinitionIterator.next();
                    Object targetBean = this.applicationContext.getBean(next);
                    KStreamBoundElementFactory.KStreamWrapper
                            boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
                    boundElement.wrap((KStream) result);

                    kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
                            targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));
                }
                handleSingleKStreamOutbound(resolvableTypes, outboundResolvableType != null ?
                        outboundResolvableType : resolvableType.getGeneric(1), (KStream) result, outboundDefinitionIterator);
            }
        }
    }
@@ -270,6 +329,63 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
    }
}

private void handleSingleKStreamOutbound(Map<String, ResolvableType> resolvableTypes, ResolvableType outboundResolvableType,
        KStream<Object, Object> result, Iterator<String> outboundDefinitionIterator) {
    if (outboundDefinitionIterator.hasNext()) {
        String outbound = outboundDefinitionIterator.next();
        Object targetBean = handleSingleKStreamOutbound(result, outbound);
        kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean,
                outboundResolvableType);

        final String next = resolvableTypes.keySet().iterator().next();
        final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
                .getStreamsBuilderFactoryBeanPerBinding().get(next);
        this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(outbound, streamsBuilderFactoryBean);
    }
}

private Object handleSingleKStreamOutbound(KStream<Object, Object> result, String next) {
    Object targetBean = this.applicationContext.getBean(next);
    KStreamBoundElementFactory.KStreamWrapper
            boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
    boundElement.wrap(result);
    return targetBean;
}

@SuppressWarnings({ "unchecked", "rawtypes" })
private void handleKStreamArrayOutbound(ResolvableType resolvableType, String functionName,
        KafkaStreamsBindableProxyFactory kafkaStreamsBindableProxyFactory,
        ResolvableType outboundResolvableType, Object[] result,
        StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
    // Bind the targets now: the output bindings were deferred in the KafkaStreamsBindableProxyFactory
    // because it did not know the returned array size. At this point in the execution,
    // we know exactly the number of outbound components (from the array length), so do the binding.
    final int length = result.length;

    List<String> outputBindings = getOutputBindings(functionName, length);
    Iterator<String> iterator = outputBindings.iterator();
    BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;

    for (Object o : result) {
        String next = iterator.next();
        kafkaStreamsBindableProxyFactory.addOutputBinding(next, KStream.class);
        RootBeanDefinition rootBeanDefinition1 = new RootBeanDefinition();
        rootBeanDefinition1.setInstanceSupplier(() -> kafkaStreamsBindableProxyFactory.getOutputHolders().get(next).getBoundTarget());
        registry.registerBeanDefinition(next, rootBeanDefinition1);

        Object targetBean = this.applicationContext.getBean(next);

        KStreamBoundElementFactory.KStreamWrapper
                boundElement = (KStreamBoundElementFactory.KStreamWrapper) targetBean;
        boundElement.wrap((KStream) o);

        kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(
                targetBean, outboundResolvableType != null ? outboundResolvableType : resolvableType.getGeneric(1));

        this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(next, streamsBuilderFactoryBean);
    }
}

private List<String> getOutputBindings(String functionName, int outputs) {
    List<String> outputBindings = this.streamFunctionProperties.getOutputBindings(functionName);
    List<String> outputBindingNames = new ArrayList<>();
@@ -330,7 +446,8 @@ public class KafkaStreamsFunctionProcessor extends AbstractKafkaStreamsBinderPro
    kStreamWrapper.wrap((KStream<Object, Object>) stream);

    this.kafkaStreamsBindingInformationCatalogue.addKeySerde((KStream<?, ?>) kStreamWrapper, keySerde);
    this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactory(streamsBuilderFactoryBean);

    this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(input, streamsBuilderFactoryBean);

    if (KStream.class.isAssignableFrom(stringResolvableTypeMap.get(input).getRawClass())) {
        final Class<?> valueClass =

@@ -42,7 +42,14 @@ public class KafkaStreamsRegistry {
private final Set<KafkaStreams> kafkaStreams = new HashSet<>();

Set<KafkaStreams> getKafkaStreams() {
    return this.kafkaStreams;
    Set<KafkaStreams> currentlyRunningKafkaStreams = new HashSet<>();
    for (KafkaStreams ks : this.kafkaStreams) {
        final StreamsBuilderFactoryBean streamsBuilderFactoryBean = streamsBuilderFactoryBeanMap.get(ks);
        if (streamsBuilderFactoryBean.isRunning()) {
            currentlyRunningKafkaStreams.add(ks);
        }
    }
    return currentlyRunningKafkaStreams;
}
|
||||
@@ -67,7 +74,7 @@ public class KafkaStreamsRegistry {
|
||||
public StreamsBuilderFactoryBean streamsBuilderFactoryBean(String applicationId) {
|
||||
final Optional<StreamsBuilderFactoryBean> first = this.streamsBuilderFactoryBeanMap.values()
|
||||
.stream()
|
||||
.filter(streamsBuilderFactoryBean -> streamsBuilderFactoryBean
|
||||
.filter(streamsBuilderFactoryBean -> streamsBuilderFactoryBean.isRunning() && streamsBuilderFactoryBean
|
||||
.getStreamsConfiguration().getProperty(StreamsConfig.APPLICATION_ID_CONFIG)
|
||||
.equals(applicationId))
|
||||
.findFirst();
|
||||
|
||||

@@ -185,17 +185,29 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
}

if (methodAnnotatedOutboundNames != null && methodAnnotatedOutboundNames.length > 0) {
    methodAnnotatedInboundName = populateInboundIfMissing(method, methodAnnotatedInboundName);
    final StreamsBuilderFactoryBean streamsBuilderFactoryBean = this.kafkaStreamsBindingInformationCatalogue
            .getStreamsBuilderFactoryBeanPerBinding().get(methodAnnotatedInboundName);

    if (result.getClass().isArray()) {
        Object[] outboundKStreams = (Object[]) result;
        int i = 0;
        for (Object outboundKStream : outboundKStreams) {
            final String methodAnnotatedOutboundName = methodAnnotatedOutboundNames[i++];

            this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(
                    methodAnnotatedOutboundName, streamsBuilderFactoryBean);

            Object targetBean = this.applicationContext
                    .getBean(methodAnnotatedOutboundNames[i++]);
                    .getBean(methodAnnotatedOutboundName);
            kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
            adaptStreamListenerResult(outboundKStream, targetBean);
        }
    }
    else {
        this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(
                methodAnnotatedOutboundNames[0], streamsBuilderFactoryBean);

        Object targetBean = this.applicationContext
                .getBean(methodAnnotatedOutboundNames[0]);
        kafkaStreamsBindingInformationCatalogue.addOutboundKStreamResolvable(targetBean, ResolvableType.forMethodReturnType(method));
@@ -210,6 +222,21 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
    }
}

private String populateInboundIfMissing(Method method, String methodAnnotatedInboundName) {
    if (!StringUtils.hasText(methodAnnotatedInboundName)) {
        Object[] arguments = new Object[method.getParameterTypes().length];
        if (arguments.length > 0) {
            MethodParameter methodParameter = MethodParameter.forExecutable(method, 0);
            if (methodParameter.hasParameterAnnotation(Input.class)) {
                Input methodAnnotation = methodParameter
                        .getParameterAnnotation(Input.class);
                methodAnnotatedInboundName = methodAnnotation.value();
            }
        }
    }
    return methodAnnotatedInboundName;
}

@SuppressWarnings("unchecked")
private void adaptStreamListenerResult(Object outboundKStream, Object targetBean) {
    for (StreamListenerResultAdapter streamListenerResultAdapter : this.streamListenerResultAdapters) {
@@ -292,8 +319,7 @@ class KafkaStreamsStreamListenerSetupMethodOrchestrator extends AbstractKafkaStr
        BindingProperties bindingProperties1 = this.kafkaStreamsBindingInformationCatalogue.getBindingProperties().get(kStreamWrapper);
        this.kafkaStreamsBindingInformationCatalogue.registerBindingProperties(stream, bindingProperties1);

        this.kafkaStreamsBindingInformationCatalogue
                .addStreamBuilderFactory(streamsBuilderFactoryBean);
        this.kafkaStreamsBindingInformationCatalogue.addStreamBuilderFactoryPerBinding(inboundName, streamsBuilderFactoryBean);
        for (StreamListenerParameterAdapter streamListenerParameterAdapter : adapters) {
            if (streamListenerParameterAdapter.supports(stream.getClass(),
                    methodParameter)) {

@@ -1,5 +1,5 @@
/*
* Copyright 2019-2020 the original author or authors.
* Copyright 2019-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -34,7 +34,7 @@ public class MultiBinderPropertiesConfiguration {
@Bean
@ConfigurationProperties(prefix = "spring.cloud.stream.kafka.streams.binder")
@ConditionalOnBean(name = "outerContext")
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaProperties kafkaProperties) {
public KafkaBinderConfigurationProperties binderConfigurationProperties(KafkaProperties kafkaProperties) {
    return new KafkaStreamsBinderConfigurationProperties(kafkaProperties);
}
}

@@ -0,0 +1,46 @@
/*
* Copyright 2021-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

/**
* {@link DeserializationExceptionHandler} that silently skips records that fail
* deserialization and continues processing.
*
* @author Soby Chacko
* @since 3.1.2
*/
public class SkipAndContinueExceptionHandler implements DeserializationExceptionHandler {

    @Override
    public DeserializationExceptionHandler.DeserializationHandlerResponse handle(final ProcessorContext context,
            final ConsumerRecord<byte[], byte[]> record,
            final Exception exception) {
        return DeserializationExceptionHandler.DeserializationHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // ignore
    }
}
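Unlike Kafka's LogAndContinueExceptionHandler, this handler emits no log entry per failed record. A minimal sketch of wiring it in through the binder's pass-through Kafka Streams configuration (the property key is the standard Kafka Streams default.deserialization.exception.handler; whether a given binder version also exposes a dedicated shorthand option is an assumption to verify):

spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=org.springframework.cloud.stream.binder.kafka.streams.SkipAndContinueExceptionHandler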
@@ -1,5 +1,5 @@
/*
* Copyright 2018-2019 the original author or authors.
* Copyright 2018-2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -18,9 +18,12 @@ package org.springframework.cloud.stream.binder.kafka.streams;

import java.util.Set;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.context.SmartLifecycle;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;

/**
* Iterate through all {@link StreamsBuilderFactoryBean} in the application context and
@@ -43,14 +46,14 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {

    private final KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics;

    private final StreamsListener listener;
    private final KafkaStreamsMicrometerListener listener;

    private volatile boolean running;

    StreamsBuilderFactoryManager(KafkaStreamsBindingInformationCatalogue kafkaStreamsBindingInformationCatalogue,
            KafkaStreamsRegistry kafkaStreamsRegistry,
            KafkaStreamsBinderMetrics kafkaStreamsBinderMetrics,
            StreamsListener listener) {
            KafkaStreamsMicrometerListener listener) {
        this.kafkaStreamsBindingInformationCatalogue = kafkaStreamsBindingInformationCatalogue;
        this.kafkaStreamsRegistry = kafkaStreamsRegistry;
        this.kafkaStreamsBinderMetrics = kafkaStreamsBinderMetrics;
@@ -78,11 +81,11 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
            .getStreamsBuilderFactoryBeans();
    int n = 0;
    for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
        if (this.listener != null) {
            streamsBuilderFactoryBean.addListener(this.listener);
        }
        streamsBuilderFactoryBean.start();
        this.kafkaStreamsRegistry.registerKafkaStreams(streamsBuilderFactoryBean);
        if (this.listener != null) {
            this.listener.streamsAdded("streams." + n++, streamsBuilderFactoryBean.getKafkaStreams());
        }
    }
    if (this.kafkaStreamsBinderMetrics != null) {
        this.kafkaStreamsBinderMetrics.addMetrics(streamsBuilderFactoryBeans);
@@ -103,10 +106,11 @@ class StreamsBuilderFactoryManager implements SmartLifecycle {
            .getStreamsBuilderFactoryBeans();
    int n = 0;
    for (StreamsBuilderFactoryBean streamsBuilderFactoryBean : streamsBuilderFactoryBeans) {
        streamsBuilderFactoryBean.removeListener(this.listener);
        streamsBuilderFactoryBean.stop();
        if (this.listener != null) {
            this.listener.streamsRemoved("streams." + n++, streamsBuilderFactoryBean.getKafkaStreams());
        }
    }
    for (ProducerFactory<byte[], byte[]> dlqProducerFactory : this.kafkaStreamsBindingInformationCatalogue.getDlqProducerFactories()) {
        ((DisposableBean) dlqProducerFactory).destroy();
    }
}
catch (Exception ex) {
@@ -1,46 +0,0 @@
/*
 * Copyright 2020-2020 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams;

import org.apache.kafka.streams.KafkaStreams;

/**
 * Temporary workaround until SK 2.5.3 is available.
 *
 * @author Gary Russell
 * @since 3.0.6
 *
 */
interface StreamsListener {

    /**
     * A new {@link KafkaStreams} was created.
     * @param id the streams id (factory bean name).
     * @param streams the streams;
     */
    default void streamsAdded(String id, KafkaStreams streams) {
    }

    /**
     * An existing {@link KafkaStreams} was removed.
     * @param id the streams id (factory bean name).
     * @param streams the streams;
     */
    default void streamsRemoved(String id, KafkaStreams streams) {
    }

}

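The removed workaround above is superseded by Spring Kafka's KafkaStreamsMicrometerListener, as the StreamsBuilderFactoryManager changes show. A minimal sketch, assuming Spring Kafka 2.5.3+ and Micrometer on the classpath, of attaching that listener by hand; the SimpleMeterRegistry is a stand-in for whatever registry the application uses:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.streams.KafkaStreamsMicrometerListener;

public class MicrometerListenerSketch {

    public static void attach(StreamsBuilderFactoryBean factoryBean) {
        MeterRegistry registry = new SimpleMeterRegistry(); // stand-in; any MeterRegistry works
        // Binds KafkaStreams client metrics to the registry when the streams start.
        factoryBean.addListener(new KafkaStreamsMicrometerListener(registry));
    }
}
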
@@ -1,5 +1,5 @@
/*
- * Copyright 2019-2019 the original author or authors.
+ * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -34,6 +34,7 @@ import org.apache.kafka.streams.kstream.KTable;

import org.springframework.beans.factory.BeanFactoryUtils;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
+import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.boot.autoconfigure.condition.ConditionOutcome;
import org.springframework.boot.autoconfigure.condition.SpringBootCondition;
import org.springframework.context.annotation.ConditionContext;
@@ -81,18 +82,26 @@ public class FunctionDetectorCondition extends SpringBootCondition {
        return ConditionOutcome.noMatch("No match. No Function/BiFunction/Consumer beans found");
    }

-    private static List<String> pruneFunctionBeansForKafkaStreams(List<String> strings,
+    private static List<String> pruneFunctionBeansForKafkaStreams(List<String> functionComponents,
            ConditionContext context) {
        final List<String> prunedList = new ArrayList<>();

-        for (String key : strings) {
+        for (String key : functionComponents) {
            final Class<?> classObj = ClassUtils.resolveClassName(((AnnotatedBeanDefinition)
                    context.getBeanFactory().getBeanDefinition(key))
                    .getMetadata().getClassName(),
                    ClassUtils.getDefaultClassLoader());
            try {

                Method[] methods = classObj.getMethods();
                Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
+                // check if the bean name is overridden.
+                if (!kafkaStreamMethod.isPresent()) {
+                    final BeanDefinition beanDefinition = context.getBeanFactory().getBeanDefinition(key);
+                    final String factoryMethodName = beanDefinition.getFactoryMethodName();
+                    kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(factoryMethodName)).findFirst();
+                }

                if (kafkaStreamMethod.isPresent()) {
                    Method method = kafkaStreamMethod.get();
                    ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
@@ -101,6 +110,20 @@ public class FunctionDetectorCondition extends SpringBootCondition {
                        prunedList.add(key);
                    }
                }
+                else {
+                    // check if it is a @Component bean.
+                    Optional<Method> componentBeanMethod = Arrays.stream(methods).filter(
+                            m -> (m.getName().equals("apply") || m.getName().equals("accept"))
+                                    && isKafkaStreamsTypeFound(m)).findFirst();
+                    if (componentBeanMethod.isPresent()) {
+                        Method method = componentBeanMethod.get();
+                        final ResolvableType resolvableType1 = ResolvableType.forMethodParameter(method, 0);
+                        final Class<?> rawClass = resolvableType1.getRawClass();
+                        if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
+                            prunedList.add(key);
+                        }
+                    }
+                }
            }
            catch (Exception e) {
                LOG.error("Function not found: " + key, e);
@@ -108,4 +131,10 @@ public class FunctionDetectorCondition extends SpringBootCondition {
        }
        return prunedList;
    }

+    private static boolean isKafkaStreamsTypeFound(Method method) {
+        return KStream.class.isAssignableFrom(method.getParameters()[0].getType()) ||
+                KTable.class.isAssignableFrom(method.getParameters()[0].getType()) ||
+                GlobalKTable.class.isAssignableFrom(method.getParameters()[0].getType());
+    }
}

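The condition above now also matches @Component-style beans, whose functional method is apply or accept rather than a bean factory method. A minimal sketch of such a bean (the class and bean name here are hypothetical, not taken from this diff); the condition matches it because the first parameter of apply is a Kafka Streams type:

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.stereotype.Component;

@Component("uppercase") // hypothetical bean name
public class UppercaseFunction implements Function<KStream<String, String>, KStream<String, String>> {

    @Override
    public KStream<String, String> apply(KStream<String, String> input) {
        // Matched by FunctionDetectorCondition: method name "apply", first parameter a KStream.
        return input.mapValues(value -> value.toUpperCase());
    }
}
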
@@ -1,5 +1,5 @@
/*
- * Copyright 2019-2019 the original author or authors.
+ * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@

package org.springframework.cloud.stream.binder.kafka.streams.function;

+import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
@@ -72,29 +73,45 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {

    private final ResolvableType type;

+    private final Method method;

    private final String functionName;

    private BeanFactory beanFactory;

-    public KafkaStreamsBindableProxyFactory(ResolvableType type, String functionName) {
+    public KafkaStreamsBindableProxyFactory(ResolvableType type, String functionName, Method method) {
        super(type.getType().getClass());
        this.type = type;
        this.functionName = functionName;
+        this.method = method;
    }

    @Override
    public void afterPropertiesSet() {
        populateBindingTargetFactories(beanFactory);
        Assert.notEmpty(KafkaStreamsBindableProxyFactory.this.bindingTargetFactories,
                "'bindingTargetFactories' cannot be empty");

        int resolvableTypeDepthCounter = 0;
-        ResolvableType argument = this.type.getGeneric(resolvableTypeDepthCounter++);
+        boolean isKafkaStreamsType = this.type.getRawClass().isAssignableFrom(KStream.class) ||
+                this.type.getRawClass().isAssignableFrom(KTable.class) ||
+                this.type.getRawClass().isAssignableFrom(GlobalKTable.class);
+        ResolvableType argument = isKafkaStreamsType ? this.type : this.type.getGeneric(resolvableTypeDepthCounter++);
        List<String> inputBindings = buildInputBindings();
        Iterator<String> iterator = inputBindings.iterator();
        String next = iterator.next();
        bindInput(argument, next);

+        // Check if it is a component-style bean.
+        if (method != null) {
+            final Object bean = beanFactory.getBean(functionName);
+            if (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass())) {
+                argument = ResolvableType.forMethodParameter(method, 1);
+                next = iterator.next();
+                bindInput(argument, next);
+            }
+        }
+        // Normal functional bean
        if (this.type.getRawClass() != null &&
                (this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
                        this.type.getRawClass().isAssignableFrom(BiConsumer.class))) {
@@ -104,6 +121,9 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {
        }
        ResolvableType outboundArgument = this.type.getGeneric(resolvableTypeDepthCounter);

+        if (method != null) {
+            outboundArgument = ResolvableType.forMethodReturnType(method);
+        }
        while (isAnotherFunctionOrConsumerFound(outboundArgument)) {
            // The function is a curried function. We should introspect the partial function chain hierarchy.
            argument = outboundArgument.getGeneric(0);
@@ -112,8 +132,7 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {
            outboundArgument = outboundArgument.getGeneric(1);
        }

        // Introspect output for binding.
-        if (outboundArgument != null && outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
+        if (outboundArgument.getRawClass() != null && (!outboundArgument.isArray() &&
                outboundArgument.getRawClass().isAssignableFrom(KStream.class))) {
            // if the type is array, we need to do a late binding as we don't know the number of
            // output bindings at this point in the flow.
@@ -165,12 +184,31 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {
        int numberOfInputs = this.type.getRawClass() != null &&
                (this.type.getRawClass().isAssignableFrom(BiFunction.class) ||
                        this.type.getRawClass().isAssignableFrom(BiConsumer.class)) ? 2 : getNumberOfInputs();

+        // For @Component style beans.
+        if (method != null) {
+            final ResolvableType returnType = ResolvableType.forMethodReturnType(method);
+            Object bean = beanFactory.containsBean(functionName) ? beanFactory.getBean(functionName) : null;
+
+            if (bean != null && (BiFunction.class.isAssignableFrom(bean.getClass()) || BiConsumer.class.isAssignableFrom(bean.getClass()))) {
+                numberOfInputs = 2;
+            }
+            else if (returnType.getRawClass().isAssignableFrom(Function.class) || returnType.getRawClass().isAssignableFrom(Consumer.class)) {
+                numberOfInputs = 1;
+                ResolvableType arg1 = returnType;
+
+                while (isAnotherFunctionOrConsumerFound(arg1)) {
+                    arg1 = arg1.getGeneric(1);
+                    numberOfInputs++;
+                }
+            }
+        }
+
        int i = 0;
        while (i < numberOfInputs) {
            inputs.add(String.format("%s-%s-%d", this.functionName, FunctionConstants.DEFAULT_INPUT_SUFFIX, i++));
        }
        return inputs;
    }

@@ -182,7 +220,6 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {
            numberOfInputs++;
        }
        return numberOfInputs;
    }

    private void bindInput(ResolvableType arg0, String inputName) {
@@ -191,13 +228,10 @@ public class KafkaStreamsBindableProxyFactory extends AbstractBindableProxyFactory {
                new BoundTargetHolder(getBindingTargetFactory(arg0.getRawClass())
                        .createInput(inputName), true));
        }

        BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;

        RootBeanDefinition rootBeanDefinition = new RootBeanDefinition();
        rootBeanDefinition.setInstanceSupplier(() -> inputHolders.get(inputName).getBoundTarget());
        registry.registerBeanDefinition(inputName, rootBeanDefinition);
    }

    @Override

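To illustrate the curried-function introspection above: a minimal, hypothetical functional bean whose partial function chain the factory walks, deriving one input binding per nested function (curried-in-0, curried-in-1) and an output binding (curried-out-0) from the terminal KStream return type:

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CurriedFunctionSketch {

    @Bean
    public Function<KStream<String, String>,
            Function<KStream<String, String>, KStream<String, String>>> curried() {
        // Each nested Function contributes one more input binding.
        return first -> second -> first.merge(second);
    }
}
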
@@ -1,5 +1,5 @@
/*
- * Copyright 2019-2019 the original author or authors.
+ * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -38,7 +38,7 @@ public class KafkaStreamsFunctionAutoConfiguration {
            KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
            KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
        return new KafkaStreamsFunctionProcessorInvoker(kafkaStreamsFunctionBeanPostProcessor.getResolvableTypes(),
-                kafkaStreamsFunctionProcessor, kafkaStreamsBindableProxyFactories);
+                kafkaStreamsFunctionProcessor, kafkaStreamsBindableProxyFactories, kafkaStreamsFunctionBeanPostProcessor.getMethods());
    }

    @Bean

@@ -1,5 +1,5 @@
/*
- * Copyright 2019-2019 the original author or authors.
+ * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -18,6 +18,7 @@ package org.springframework.cloud.stream.binder.kafka.streams.function;

+import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
@@ -40,6 +41,7 @@ import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.AnnotatedBeanDefinition;
+import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.RootBeanDefinition;
@@ -48,10 +50,8 @@ import org.springframework.core.ResolvableType;
import org.springframework.util.ClassUtils;

/**
- *
 * @author Soby Chacko
 * @since 2.2.0
- *
 */
public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {

@@ -62,9 +62,13 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
    private ConfigurableListableBeanFactory beanFactory;
    private boolean onlySingleFunction;
    private Map<String, ResolvableType> resolvableTypeMap = new TreeMap<>();
+    private Map<String, Method> methods = new TreeMap<>();

    private final StreamFunctionProperties streamFunctionProperties;

    private Map<String, ResolvableType> kafkaStreamsOnlyResolvableTypes = new HashMap<>();
+    private Map<String, Method> kafkaStreamsOnlyMethods = new HashMap<>();

    public KafkaStreamsFunctionBeanPostProcessor(StreamFunctionProperties streamFunctionProperties) {
        this.streamFunctionProperties = streamFunctionProperties;
    }
@@ -73,6 +77,10 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
        return this.resolvableTypeMap;
    }

+    public Map<String, Method> getMethods() {
+        return methods;
+    }

    @Override
    public void afterPropertiesSet() {
        String[] functionNames = this.beanFactory.getBeanNamesForType(Function.class);
@@ -85,10 +93,14 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
                Stream.concat(Stream.of(biFunctionNames), Stream.of(biConsumerNames)));
        final List<String> collect = concat.collect(Collectors.toList());
        collect.removeIf(s -> Arrays.stream(EXCLUDE_FUNCTIONS).anyMatch(t -> t.equals(s)));

        onlySingleFunction = collect.size() == 1;
        collect.stream()
                .forEach(this::extractResolvableTypes);

+        kafkaStreamsOnlyResolvableTypes.keySet().forEach(k -> addResolvableTypeInfo(k, kafkaStreamsOnlyResolvableTypes.get(k)));
+        kafkaStreamsOnlyMethods.keySet().forEach(k -> addResolvableTypeInfo(k, kafkaStreamsOnlyMethods.get(k)));

        BeanDefinitionRegistry registry = (BeanDefinitionRegistry) beanFactory;

        for (String s : getResolvableTypes().keySet()) {
@@ -98,6 +110,8 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
                    .addGenericArgumentValue(getResolvableTypes().get(s));
            rootBeanDefinition.getConstructorArgumentValues()
                    .addGenericArgumentValue(s);
+            rootBeanDefinition.getConstructorArgumentValues()
+                    .addGenericArgumentValue(getMethods().get(s));
            registry.registerBeanDefinition("kafkaStreamsBindableProxyFactory-" + s, rootBeanDefinition);
        }
    }
@@ -109,9 +123,15 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
                ClassUtils.getDefaultClassLoader());
        try {
            Method[] methods = classObj.getMethods();
-            Optional<Method> kafkaStreamMethod = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
-            if (kafkaStreamMethod.isPresent()) {
-                Method method = kafkaStreamMethod.get();
+            Optional<Method> functionalBeanMethods = Arrays.stream(methods).filter(m -> m.getName().equals(key)).findFirst();
+            if (!functionalBeanMethods.isPresent()) {
+                final BeanDefinition beanDefinition = this.beanFactory.getBeanDefinition(key);
+                final String factoryMethodName = beanDefinition.getFactoryMethodName();
+                functionalBeanMethods = Arrays.stream(methods).filter(m -> m.getName().equals(factoryMethodName)).findFirst();
+            }
+
+            if (functionalBeanMethods.isPresent()) {
+                Method method = functionalBeanMethods.get();
                ResolvableType resolvableType = ResolvableType.forMethodReturnType(method, classObj);
                final Class<?> rawClass = resolvableType.getGeneric(0).getRawClass();
                if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
@@ -119,12 +139,25 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
                    resolvableTypeMap.put(key, resolvableType);
                }
                else {
                    final String definition = streamFunctionProperties.getDefinition();
                    if (definition == null) {
                        throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
                    }
                    else if (definition.contains(key)) {
                        discoverOnlyKafkaStreamsResolvableTypes(key, resolvableType);
                    }
                }
            }
+            else {
+                Optional<Method> componentBeanMethods = Arrays.stream(methods)
+                        .filter(m -> m.getName().equals("apply") && isKafkaStreamsTypeFound(m) ||
+                                m.getName().equals("accept") && isKafkaStreamsTypeFound(m)).findFirst();
+                if (componentBeanMethods.isPresent()) {
+                    Method method = componentBeanMethods.get();
+                    final ResolvableType resolvableType = ResolvableType.forMethodParameter(method, 0);
+                    final Class<?> rawClass = resolvableType.getRawClass();
+                    if (rawClass == KStream.class || rawClass == KTable.class || rawClass == GlobalKTable.class) {
+                        if (onlySingleFunction) {
+                            resolvableTypeMap.put(key, resolvableType);
+                            this.methods.put(key, method);
+                        }
+                        else {
+                            discoverOnlyKafkaStreamsResolvableTypesAndMethods(key, resolvableType, method);
+                        }
+                    }
+                }
@@ -135,6 +168,51 @@ public class KafkaStreamsFunctionBeanPostProcessor implements InitializingBean, BeanFactoryAware {
            }
        }

    private void addResolvableTypeInfo(String key, ResolvableType resolvableType) {
        if (kafkaStreamsOnlyResolvableTypes.size() == 1) {
            resolvableTypeMap.put(key, resolvableType);
        }
        else {
            final String definition = streamFunctionProperties.getDefinition();
            if (definition == null) {
                throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
            }
            else if (definition.contains(key)) {
                resolvableTypeMap.put(key, resolvableType);
            }
        }
    }

    private void discoverOnlyKafkaStreamsResolvableTypes(String key, ResolvableType resolvableType) {
        kafkaStreamsOnlyResolvableTypes.put(key, resolvableType);
    }

    private void discoverOnlyKafkaStreamsResolvableTypesAndMethods(String key, ResolvableType resolvableType, Method method) {
        kafkaStreamsOnlyResolvableTypes.put(key, resolvableType);
        kafkaStreamsOnlyMethods.put(key, method);
    }

    private void addResolvableTypeInfo(String key, Method method) {
        if (kafkaStreamsOnlyMethods.size() == 1) {
            this.methods.put(key, method);
        }
        else {
            final String definition = streamFunctionProperties.getDefinition();
            if (definition == null) {
                throw new IllegalStateException("Multiple functions found, but function definition property is not set.");
            }
            else if (definition.contains(key)) {
                this.methods.put(key, method);
            }
        }
    }

    private boolean isKafkaStreamsTypeFound(Method method) {
        return KStream.class.isAssignableFrom(method.getParameters()[0].getType()) ||
                KTable.class.isAssignableFrom(method.getParameters()[0].getType()) ||
                GlobalKTable.class.isAssignableFrom(method.getParameters()[0].getType());
    }

    @Override
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        this.beanFactory = (ConfigurableListableBeanFactory) beanFactory;

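When several candidate functions are present, the post-processor above throws "Multiple functions found, but function definition property is not set." A minimal sketch of resolving that ambiguity at startup by naming the intended function; the application class and function name here are placeholders, and the property mirrors the ones used in the tests later in this diff:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;

public class FunctionDefinitionSketch {

    @EnableAutoConfiguration
    public static class StreamsApp {
        // Hypothetical: functional beans named "process" and "analyze" would live here.
    }

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(StreamsApp.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        // Names the Kafka Streams function to bind, resolving the ambiguity.
        app.run("--spring.cloud.stream.function.definition=process");
    }
}
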
@@ -1,5 +1,5 @@
/*
- * Copyright 2019-2019 the original author or authors.
+ * Copyright 2019-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
@@ -16,6 +16,7 @@

package org.springframework.cloud.stream.binder.kafka.streams.function;

+import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Map;
import java.util.Optional;
@@ -35,13 +36,16 @@ public class KafkaStreamsFunctionProcessorInvoker {
    private final KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor;
    private final Map<String, ResolvableType> resolvableTypeMap;
    private final KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories;
+    private final Map<String, Method> methods;

    public KafkaStreamsFunctionProcessorInvoker(Map<String, ResolvableType> resolvableTypeMap,
            KafkaStreamsFunctionProcessor kafkaStreamsFunctionProcessor,
-            KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories) {
+            KafkaStreamsBindableProxyFactory[] kafkaStreamsBindableProxyFactories,
+            Map<String, Method> methods) {
        this.kafkaStreamsFunctionProcessor = kafkaStreamsFunctionProcessor;
        this.resolvableTypeMap = resolvableTypeMap;
        this.kafkaStreamsBindableProxyFactories = kafkaStreamsBindableProxyFactories;
+        this.methods = methods;
    }

    @PostConstruct
@@ -49,7 +53,7 @@ public class KafkaStreamsFunctionProcessorInvoker {
        resolvableTypeMap.forEach((key, value) -> {
            Optional<KafkaStreamsBindableProxyFactory> proxyFactory =
                    Arrays.stream(kafkaStreamsBindableProxyFactories).filter(p -> p.getFunctionName().equals(key)).findFirst();
-            this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key, proxyFactory.get());
+            this.kafkaStreamsFunctionProcessor.setupFunctionInvokerForKafkaStreams(value, key, proxyFactory.get(), methods.get(key));
        });
    }
}

@@ -20,6 +20,8 @@ import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
import java.util.function.Function;

import org.apache.kafka.clients.consumer.Consumer;
@@ -29,7 +31,9 @@ import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;
+import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -48,6 +52,7 @@ import org.springframework.kafka.support.serializer.JsonSerializer;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
+import org.springframework.util.Assert;

import static org.assertj.core.api.Assertions.assertThat;

@@ -61,6 +66,8 @@ public class KafkaStreamsEventTypeRoutingTests {

    private static Consumer<Integer, Foo> consumer;

+    private static CountDownLatch LATCH = new CountDownLatch(3);

    @BeforeClass
    public static void setUp() {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("test-group-1", "false",
@@ -149,6 +156,61 @@ public class KafkaStreamsEventTypeRoutingTests {
        }
    }

    @Test
    public void testRoutingWorksBasedOnEventTypesConsumer() throws Exception {
        SpringApplication app = new SpringApplication(EventTypeRoutingTestConfig.class);
        app.setWebApplicationType(WebApplicationType.NONE);

        try (ConfigurableApplicationContext context = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.function.definition=consumer",
                "--spring.cloud.stream.bindings.consumer-in-0.destination=foo-consumer-1",
                "--spring.cloud.stream.kafka.streams.bindings.consumer-in-0.consumer.eventTypes=foo,bar",
                "--spring.cloud.stream.kafka.streams.binder.functions.consumer.applicationId=consumer-id-foo-0",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            senderProps.put("value.serializer", JsonSerializer.class);
            DefaultKafkaProducerFactory<Integer, Foo> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, Foo> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("foo-consumer-1");
                Foo foo1 = new Foo();
                foo1.setFoo("foo-1");
                Headers headers = new RecordHeaders();
                headers.add(new RecordHeader("event_type", "foo".getBytes()));

                final ProducerRecord<Integer, Foo> producerRecord1 = new ProducerRecord<>("foo-consumer-1", 0, 56, foo1, headers);
                template.send(producerRecord1);

                Foo foo2 = new Foo();
                foo2.setFoo("foo-2");

                final ProducerRecord<Integer, Foo> producerRecord2 = new ProducerRecord<>("foo-consumer-1", 0, 57, foo2);
                template.send(producerRecord2);

                Foo foo3 = new Foo();
                foo3.setFoo("foo-3");

                final ProducerRecord<Integer, Foo> producerRecord3 = new ProducerRecord<>("foo-consumer-1", 0, 58, foo3, headers);
                template.send(producerRecord3);

                Foo foo4 = new Foo();
                foo4.setFoo("foo-4");
                Headers headers1 = new RecordHeaders();
                headers1.add(new RecordHeader("event_type", "bar".getBytes()));

                final ProducerRecord<Integer, Foo> producerRecord4 = new ProducerRecord<>("foo-consumer-1", 0, 59, foo4, headers1);
                template.send(producerRecord4);

                Assert.isTrue(LATCH.await(10, TimeUnit.SECONDS), "Foo");
            }
            finally {
                pf.destroy();
            }
        }
    }

    @EnableAutoConfiguration
    public static class EventTypeRoutingTestConfig {

@@ -157,6 +219,19 @@ public class KafkaStreamsEventTypeRoutingTests {
            return input -> input;
        }

+        @Bean
+        public java.util.function.Consumer<KTable<Integer, Foo>> consumer() {
+            return ktable -> ktable.toStream().foreach((key, value) -> {
+                LATCH.countDown();
+            });
+        }

+        @Bean
+        public java.util.function.Consumer<GlobalKTable<Integer, Foo>> global() {
+            return ktable -> {
+            };
+        }

    }

    static class Foo {

@@ -102,6 +102,7 @@ public class KafkaStreamsInteractiveQueryIntegrationTests {
        Mockito.when(mock.getKafkaStreams()).thenReturn(mockKafkaStreams);
        KafkaStreamsRegistry kafkaStreamsRegistry = new KafkaStreamsRegistry();
        kafkaStreamsRegistry.registerKafkaStreams(mock);
+        Mockito.when(mock.isRunning()).thenReturn(true);
        KafkaStreamsBinderConfigurationProperties binderConfigurationProperties =
                new KafkaStreamsBinderConfigurationProperties(new KafkaProperties());
        binderConfigurationProperties.getStateStoreRetry().setMaxAttempts(3);

@@ -17,9 +17,11 @@
package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
@@ -40,15 +42,22 @@ import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.beans.DirectFieldAccessor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.DefaultBinding;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsRegistry;
import org.springframework.cloud.stream.binder.kafka.streams.endpoint.KafkaStreamsTopologyEndpoint;
import org.springframework.cloud.stream.binding.InputBindingLifecycle;
import org.springframework.cloud.stream.binding.OutputBindingLifecycle;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.Lifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.config.StreamsBuilderFactoryBeanCustomizer;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
@@ -90,7 +99,7 @@ public class KafkaStreamsBinderWordCountFunctionTests {

    @Test
    @SuppressWarnings("unchecked")
-    public void testKstreamWordCountFunction() throws Exception {
+    public void testBasicKStreamTopologyExecution() throws Exception {
        SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
        app.setWebApplicationType(WebApplicationType.NONE);

@@ -109,9 +118,15 @@ public class KafkaStreamsBinderWordCountFunctionTests {
                "=org.apache.kafka.common.serialization.Serdes$StringSerde",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            receiveAndValidate("words", "counts");

            final MeterRegistry meterRegistry = context.getBean(MeterRegistry.class);
            Thread.sleep(100);
            assertThat(meterRegistry.getMeters().size() > 1).isTrue();

            assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.stream.thread.poll.records.max"))).isTrue();
            assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.consumer.network.io.total"))).isTrue();
            assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.producer.record.send.total"))).isTrue();
            assertThat(meterRegistry.getMeters().stream().anyMatch(m -> m.getId().getName().equals("kafka.admin.client.network.io.total"))).isTrue();

            Assert.isTrue(LATCH.await(5, TimeUnit.SECONDS), "Failed to call customizers");
            // Testing topology endpoint
            final KafkaStreamsRegistry kafkaStreamsRegistry = context.getBean(KafkaStreamsRegistry.class);
@@ -126,6 +141,31 @@ public class KafkaStreamsBinderWordCountFunctionTests {
            Map<String, Object> streamConfigGlobalProperties = (Map<String, Object>) context.getBean("streamConfigGlobalProperties");
            assertThat(streamConfigGlobalProperties.get("request.timeout.ms")).isEqualTo("29000");
            assertThat(streamConfigGlobalProperties.get("max.block.ms")).isEqualTo("90000");

            InputBindingLifecycle inputBindingLifecycle = context.getBean(InputBindingLifecycle.class);
            final Collection<Binding<Object>> inputBindings = (Collection<Binding<Object>>) new DirectFieldAccessor(inputBindingLifecycle)
                    .getPropertyValue("inputBindings");
            assertThat(inputBindings).isNotNull();
            final Optional<Binding<Object>> theOnlyInputBinding = inputBindings.stream().findFirst();
            assertThat(theOnlyInputBinding.isPresent()).isTrue();
            final DefaultBinding<Object> objectBinding = (DefaultBinding<Object>) theOnlyInputBinding.get();
            assertThat(objectBinding.getBindingName()).isEqualTo("process-in-0");

            final Lifecycle lifecycle = (Lifecycle) new DirectFieldAccessor(objectBinding).getPropertyValue("lifecycle");
            final StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean(StreamsBuilderFactoryBean.class);
            assertThat(lifecycle).isEqualTo(streamsBuilderFactoryBean);

            OutputBindingLifecycle outputBindingLifecycle = context.getBean(OutputBindingLifecycle.class);
            final Collection<Binding<Object>> outputBindings = (Collection<Binding<Object>>) new DirectFieldAccessor(outputBindingLifecycle)
                    .getPropertyValue("outputBindings");
            assertThat(outputBindings).isNotNull();
            final Optional<Binding<Object>> theOnlyOutputBinding = outputBindings.stream().findFirst();
            assertThat(theOnlyOutputBinding.isPresent()).isTrue();
            final DefaultBinding<Object> objectBinding1 = (DefaultBinding<Object>) theOnlyOutputBinding.get();
            assertThat(objectBinding1.getBindingName()).isEqualTo("process-out-0");

            final Lifecycle lifecycle1 = (Lifecycle) new DirectFieldAccessor(objectBinding1).getPropertyValue("lifecycle");
            assertThat(lifecycle1).isEqualTo(streamsBuilderFactoryBean);
        }
    }

@@ -0,0 +1,347 @@
/*
 * Copyright 2021-2021 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.cloud.stream.binder.kafka.streams.function;

import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Function;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
import org.springframework.kafka.test.utils.KafkaTestUtils;
import org.springframework.stereotype.Component;
import org.springframework.util.Assert;

import static org.assertj.core.api.Assertions.assertThat;

/**
 * @author Soby Chacko
 */
public class KafkaStreamsComponentBeansTests {

    @ClassRule
    public static EmbeddedKafkaRule embeddedKafkaRule = new EmbeddedKafkaRule(1, true,
            "testFunctionComponent-out", "testBiFunctionComponent-out", "testCurriedFunctionWithFunctionTerminal-out");

    private static EmbeddedKafkaBroker embeddedKafka = embeddedKafkaRule.getEmbeddedKafka();

    private static Consumer<String, String> consumer1;
    private static Consumer<String, String> consumer2;
    private static Consumer<String, String> consumer3;

    private final static CountDownLatch LATCH_1 = new CountDownLatch(1);
    private final static CountDownLatch LATCH_2 = new CountDownLatch(2);
    private final static CountDownLatch LATCH_3 = new CountDownLatch(3);

    @BeforeClass
    public static void setUp() {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group", "false",
                embeddedKafka);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
        consumer1 = cf.createConsumer();
        embeddedKafka.consumeFromEmbeddedTopics(consumer1, "testFunctionComponent-out");

        Map<String, Object> consumerProps1 = KafkaTestUtils.consumerProps("group-x", "false",
                embeddedKafka);
        consumerProps1.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps1.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        DefaultKafkaConsumerFactory<String, String> cf1 = new DefaultKafkaConsumerFactory<>(consumerProps1);
        consumer2 = cf1.createConsumer();
        embeddedKafka.consumeFromEmbeddedTopics(consumer2, "testBiFunctionComponent-out");

        Map<String, Object> consumerProps2 = KafkaTestUtils.consumerProps("group-y", "false",
                embeddedKafka);
        consumerProps2.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps2.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        DefaultKafkaConsumerFactory<String, String> cf2 = new DefaultKafkaConsumerFactory<>(consumerProps2);
        consumer3 = cf2.createConsumer();
        embeddedKafka.consumeFromEmbeddedTopics(consumer3, "testCurriedFunctionWithFunctionTerminal-out");
    }

    @AfterClass
    public static void tearDown() {
        consumer1.close();
        consumer2.close();
        consumer3.close();
    }

    @Test
    public void testFunctionComponent() {
        SpringApplication app = new SpringApplication(FunctionAsComponent.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext ignored = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.foo-in-0.destination=testFunctionComponent-in",
                "--spring.cloud.stream.bindings.foo-out-0.destination=testFunctionComponent-out",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testFunctionComponent-in");
                template.sendDefault("foobar");
                ConsumerRecord<String, String> cr = KafkaTestUtils.getSingleRecord(consumer1, "testFunctionComponent-out");
                assertThat(cr.value().contains("foobarfoobar")).isTrue();
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Test
    public void testConsumerComponent() throws Exception {
        SpringApplication app = new SpringApplication(ConsumerAsComponent.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext context = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.bar-in-0.destination=testConsumerComponent-in",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testConsumerComponent-in");
                template.sendDefault("foobar");
                Assert.isTrue(LATCH_1.await(10, TimeUnit.SECONDS), "bar");
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Test
    public void testBiFunctionComponent() {
        SpringApplication app = new SpringApplication(BiFunctionAsComponent.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext ignored = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.bazz-in-0.destination=testBiFunctionComponent-in-0",
                "--spring.cloud.stream.bindings.bazz-in-1.destination=testBiFunctionComponent-in-1",
                "--spring.cloud.stream.bindings.bazz-out-0.destination=testBiFunctionComponent-out",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testBiFunctionComponent-in-0");
                template.sendDefault("foobar");
                template.setDefaultTopic("testBiFunctionComponent-in-1");
                template.sendDefault("foobar");
                final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer2, 10_000, 2);
                assertThat(records.count()).isEqualTo(2);
                records.forEach(stringStringConsumerRecord -> assertThat(stringStringConsumerRecord.value().contains("foobar")).isTrue());
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Test
    public void testBiConsumerComponent() throws Exception {
        SpringApplication app = new SpringApplication(BiConsumerAsComponent.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext context = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.buzz-in-0.destination=testBiConsumerComponent-in-0",
                "--spring.cloud.stream.bindings.buzz-in-1.destination=testBiConsumerComponent-in-1",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testBiConsumerComponent-in-0");
                template.sendDefault("foobar");
                template.setDefaultTopic("testBiConsumerComponent-in-1");
                template.sendDefault("foobar");
                Assert.isTrue(LATCH_2.await(10, TimeUnit.SECONDS), "bar");
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Test
    public void testCurriedFunctionWithConsumerTerminal() throws Exception {
        SpringApplication app = new SpringApplication(CurriedFunctionWithConsumerTerminal.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext context = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.curriedConsumer-in-0.destination=testCurriedFunctionWithConsumerTerminal-in-0",
                "--spring.cloud.stream.bindings.curriedConsumer-in-1.destination=testCurriedFunctionWithConsumerTerminal-in-1",
                "--spring.cloud.stream.bindings.curriedConsumer-in-2.destination=testCurriedFunctionWithConsumerTerminal-in-2",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-0");
                template.sendDefault("foobar");
                template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-1");
                template.sendDefault("foobar");
                template.setDefaultTopic("testCurriedFunctionWithConsumerTerminal-in-2");
                template.sendDefault("foobar");
                Assert.isTrue(LATCH_3.await(10, TimeUnit.SECONDS), "bar");
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Test
    public void testCurriedFunctionWithFunctionTerminal() {
        SpringApplication app = new SpringApplication(CurriedFunctionWithFunctionTerminal.class);
        app.setWebApplicationType(WebApplicationType.NONE);
        try (ConfigurableApplicationContext context = app.run(
                "--server.port=0",
                "--spring.jmx.enabled=false",
                "--spring.cloud.stream.bindings.curriedFunction-in-0.destination=testCurriedFunctionWithFunctionTerminal-in-0",
                "--spring.cloud.stream.bindings.curriedFunction-in-1.destination=testCurriedFunctionWithFunctionTerminal-in-1",
                "--spring.cloud.stream.bindings.curriedFunction-in-2.destination=testCurriedFunctionWithFunctionTerminal-in-2",
                "--spring.cloud.stream.bindings.curriedFunction-out-0.destination=testCurriedFunctionWithFunctionTerminal-out",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.brokers=" + embeddedKafka.getBrokersAsString())) {
            Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
            DefaultKafkaProducerFactory<Integer, String> pf = new DefaultKafkaProducerFactory<>(senderProps);
            try {
                KafkaTemplate<Integer, String> template = new KafkaTemplate<>(pf, true);
                template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-0");
                template.sendDefault("foobar");
                template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-1");
                template.sendDefault("foobar");
                template.setDefaultTopic("testCurriedFunctionWithFunctionTerminal-in-2");
                template.sendDefault("foobar");
                final ConsumerRecords<String, String> records = KafkaTestUtils.getRecords(consumer3, 10_000, 3);
                assertThat(records.count()).isEqualTo(3);
                records.forEach(stringStringConsumerRecord -> assertThat(stringStringConsumerRecord.value().contains("foobar")).isTrue());
            }
            finally {
                pf.destroy();
            }
        }
    }

    @Component("foo")
    @EnableAutoConfiguration
    public static class FunctionAsComponent implements Function<KStream<Integer, String>,
            KStream<String, String>> {

        @Override
        public KStream<String, String> apply(KStream<Integer, String> stringIntegerKStream) {
            return stringIntegerKStream.map((integer, s) -> new KeyValue<>(s, s + s));
        }
    }

    @Component("bar")
    @EnableAutoConfiguration
    public static class ConsumerAsComponent implements java.util.function.Consumer<KStream<Integer, String>> {

        @Override
        public void accept(KStream<Integer, String> integerStringKStream) {
            integerStringKStream.foreach((integer, s) -> LATCH_1.countDown());
        }
    }

    @Component("bazz")
    @EnableAutoConfiguration
    public static class BiFunctionAsComponent implements BiFunction<KStream<String, String>, KStream<String, String>, KStream<String, String>> {

        @Override
        public KStream<String, String> apply(KStream<String, String> stringStringKStream, KStream<String, String> stringStringKStream2) {
            return stringStringKStream.merge(stringStringKStream2);
        }
    }

    @Component("buzz")
    @EnableAutoConfiguration
    public static class BiConsumerAsComponent implements BiConsumer<KStream<String, String>, KStream<String, String>> {

        @Override
        public void accept(KStream<String, String> stringStringKStream, KStream<String, String> stringStringKStream2) {
            final KStream<String, String> merged = stringStringKStream.merge(stringStringKStream2);
            merged.foreach((s, s2) -> LATCH_2.countDown());
        }
    }

    @Component("curriedConsumer")
    @EnableAutoConfiguration
    public static class CurriedFunctionWithConsumerTerminal implements Function<KStream<String, String>,
            Function<KStream<String, String>,
                    java.util.function.Consumer<KStream<String, String>>>> {

        @Override
        public Function<KStream<String, String>, java.util.function.Consumer<KStream<String, String>>> apply(KStream<String, String> stringStringKStream) {
            return stringStringKStream1 -> stringStringKStream2 -> {
                final KStream<String, String> merge1 = stringStringKStream.merge(stringStringKStream1);
                final KStream<String, String> merged2 = merge1.merge(stringStringKStream2);
                merged2.foreach((s1, s2) -> LATCH_3.countDown());
            };
        }
    }

    @Component("curriedFunction")
    @EnableAutoConfiguration
    public static class CurriedFunctionWithFunctionTerminal implements Function<KStream<String, String>,
            Function<KStream<String, String>,
                    java.util.function.Function<KStream<String, String>, KStream<String, String>>>> {

        @Override
        public Function<KStream<String, String>, Function<KStream<String, String>, KStream<String, String>>> apply(KStream<String, String> stringStringKStream) {
            return stringStringKStream1 -> stringStringKStream2 -> {
                final KStream<String, String> merge1 = stringStringKStream.merge(stringStringKStream1);
                return merge1.merge(stringStringKStream2);
            };
        }
    }
}

@@ -60,10 +60,10 @@ public class KafkaStreamsFunctionStateStoreTests {

        try (ConfigurableApplicationContext context = app.run("--server.port=0",
                "--spring.jmx.enabled=false",
-                "--spring.cloud.stream.function.definition=process;hello",
-                "--spring.cloud.stream.bindings.process-in-0.destination=words",
+                "--spring.cloud.stream.function.definition=biConsumerBean;hello",
+                "--spring.cloud.stream.bindings.biConsumerBean-in-0.destination=words",
                "--spring.cloud.stream.bindings.hello-in-0.destination=words",
-                "--spring.cloud.stream.kafka.streams.binder.functions.process.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-123",
+                "--spring.cloud.stream.kafka.streams.binder.functions.changed.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-123",
                "--spring.cloud.stream.kafka.streams.binder.functions.hello.applicationId=testKafkaStreamsFuncionWithMultipleStateStores-456",
                "--spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000",
                "--spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde" +
@@ -121,7 +121,7 @@ public class KafkaStreamsFunctionStateStoreTests {
        boolean processed1;
        boolean processed2;

-        @Bean
+        @Bean(name = "biConsumerBean")
        public java.util.function.BiConsumer<KStream<Object, String>, KStream<Object, String>> process() {
            return (input0, input1) ->
                    input0.process((ProcessorSupplier<Object, String>) () -> new Processor<Object, String>() {

@@ -1,5 +1,5 @@
|
||||
/*
|
||||
* Copyright 2020-2020 the original author or authors.
|
||||
* Copyright 2020-2021 the original author or authors.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -36,6 +36,7 @@ import org.junit.Test;
 import org.springframework.boot.SpringApplication;
 import org.springframework.boot.WebApplicationType;
 import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
+import org.springframework.cloud.stream.binder.kafka.streams.KafkaStreamsBindingInformationCatalogue;
 import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
 import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
 import org.springframework.context.ConfigurableApplicationContext;
@@ -67,7 +68,7 @@ public class DlqDestinationResolverTests {
 	SpringApplication app = new SpringApplication(WordCountProcessorApplication.class);
 	app.setWebApplicationType(WebApplicationType.NONE);

-	try (ConfigurableApplicationContext ignored = app.run(
+	try (ConfigurableApplicationContext context = app.run(
 			"--server.port=0",
 			"--spring.jmx.enabled=false",
 			"--spring.cloud.function.definition=process",
@@ -104,6 +105,9 @@ public class DlqDestinationResolverTests {
 	ConsumerRecord<String, String> cr2 = KafkaTestUtils.getSingleRecord(consumer1,
 			"topic2-dlq");
 	assertThat(cr2.value()).isEqualTo("foobar");
+
+	final KafkaStreamsBindingInformationCatalogue catalogue = context.getBean(KafkaStreamsBindingInformationCatalogue.class);
+	assertThat(catalogue.getDlqProducerFactories().size()).isEqualTo(1);
 }
 finally {
 	pf.destroy();
@@ -76,7 +76,6 @@ public class KafkaStreamsBinderDestinationIsPatternTests {
 	ConfigurableApplicationContext context = app.run("--server.port=0",
 			"--spring.cloud.stream.bindings.process-out-0.destination=out",
 			"--spring.cloud.stream.bindings.process-in-0.destination=in.*",
-			"--spring.cloud.stream.bindings.process-in-0.consumer.use-native-decoding=false",
 			"--spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.destinationIsPattern=true",
 			"--spring.cloud.stream.kafka.streams.binder.brokers="
 					+ embeddedKafka.getBrokersAsString());
@@ -10,7 +10,7 @@
 	<parent>
 		<groupId>org.springframework.cloud</groupId>
 		<artifactId>spring-cloud-stream-binder-kafka-parent</artifactId>
-		<version>3.1.0</version>
+		<version>3.1.3</version>
 	</parent>

 	<dependencies>
@@ -1,5 +1,5 @@
 /*
- * Copyright 2016-2018 the original author or authors.
+ * Copyright 2016-2021 the original author or authors.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -17,6 +17,8 @@
 package org.springframework.cloud.stream.binder.kafka;

+import java.time.Duration;
+import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
@@ -34,7 +36,10 @@ import org.apache.kafka.common.PartitionInfo;
 import org.springframework.beans.factory.DisposableBean;
 import org.springframework.boot.actuate.health.Health;
 import org.springframework.boot.actuate.health.HealthIndicator;
+import org.springframework.boot.actuate.health.Status;
+import org.springframework.boot.actuate.health.StatusAggregator;
 import org.springframework.kafka.core.ConsumerFactory;
+import org.springframework.kafka.listener.AbstractMessageListenerContainer;
 import org.springframework.scheduling.concurrent.CustomizableThreadFactory;

 /**
@@ -48,6 +53,7 @@ import org.springframework.scheduling.concurrent.CustomizableThreadFactory;
  * @author Soby Chacko
  * @author Vladislav Fefelov
  * @author Chukwubuikem Ume-Ugwa
+ * @author Taras Danylchuk
  */
 public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {

@@ -86,7 +92,22 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {

 	@Override
 	public Health health() {
-		Future<Health> future = executor.submit(this::buildHealthStatus);
+		Health topicsHealth = safelyBuildTopicsHealth();
+		Health listenerContainersHealth = buildListenerContainersHealth();
+		return merge(topicsHealth, listenerContainersHealth);
+	}
+
+	private Health merge(Health topicsHealth, Health listenerContainersHealth) {
+		Status aggregatedStatus = StatusAggregator.getDefault()
+				.getAggregateStatus(topicsHealth.getStatus(), listenerContainersHealth.getStatus());
+		Map<String, Object> aggregatedDetails = new HashMap<>();
+		aggregatedDetails.putAll(topicsHealth.getDetails());
+		aggregatedDetails.putAll(listenerContainersHealth.getDetails());
+		return Health.status(aggregatedStatus).withDetails(aggregatedDetails).build();
+	}
+
+	private Health safelyBuildTopicsHealth() {
+		Future<Health> future = executor.submit(this::buildTopicsHealth);
 		try {
 			return future.get(this.timeout, TimeUnit.SECONDS);
 		}
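Note: the merged status above relies on Spring Boot's default StatusAggregator. A minimal standalone sketch (illustrative, not part of this changeset; assumes Spring Boot 2.2+ on the classpath) of how that aggregation resolves:

	import org.springframework.boot.actuate.health.Status;
	import org.springframework.boot.actuate.health.StatusAggregator;

	public class StatusMergeDemo {
		public static void main(String[] args) {
			StatusAggregator aggregator = StatusAggregator.getDefault();
			// DOWN outranks UP in the default ordering, so one stopped
			// listener container marks the whole binder DOWN even when
			// every topic check passed.
			System.out.println(aggregator.getAggregateStatus(Status.UP, Status.UP));      // UP
			System.out.println(aggregator.getAggregateStatus(Status.UP, Status.DOWN));    // DOWN
			System.out.println(aggregator.getAggregateStatus(Status.UP, Status.UNKNOWN)); // UP
		}
	}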
@@ -112,10 +133,11 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {
 		}
 	}

-	private Health buildHealthStatus() {
+	private Health buildTopicsHealth() {
 		try {
 			initMetadataConsumer();
 			Set<String> downMessages = new HashSet<>();
+			Set<String> checkedTopics = new HashSet<>();
 			final Map<String, KafkaMessageChannelBinder.TopicInformation> topicsInUse = KafkaBinderHealthIndicator.this.binder
 					.getTopicsInUse();
 			if (topicsInUse.isEmpty()) {
@@ -148,11 +170,12 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {
 							downMessages.add(partitionInfo.toString());
 						}
 					}
+					checkedTopics.add(topic);
 				}
 			}
 		}
 		if (downMessages.isEmpty()) {
-			return Health.up().build();
+			return Health.up().withDetail("topicsInUse", checkedTopics).build();
 		}
 		else {
 			return Health.down()
@@ -166,6 +189,33 @@ public class KafkaBinderHealthIndicator implements HealthIndicator, DisposableBean {
 		}
 	}

+	private Health buildListenerContainersHealth() {
+		List<AbstractMessageListenerContainer<?, ?>> listenerContainers = binder.getKafkaMessageListenerContainers();
+		if (listenerContainers.isEmpty()) {
+			return Health.unknown().build();
+		}
+
+		Status status = Status.UP;
+		List<Map<String, Object>> containersDetails = new ArrayList<>();
+
+		for (AbstractMessageListenerContainer<?, ?> container : listenerContainers) {
+			Map<String, Object> containerDetails = new HashMap<>();
+			boolean isRunning = container.isRunning();
+			if (!isRunning) {
+				status = Status.DOWN;
+			}
+			containerDetails.put("isRunning", isRunning);
+			containerDetails.put("isPaused", container.isContainerPaused());
+			containerDetails.put("listenerId", container.getListenerId());
+			containerDetails.put("groupId", container.getGroupId());
+
+			containersDetails.add(containerDetails);
+		}
+		return Health.status(status)
+				.withDetail("listenerContainers", containersDetails)
+				.build();
+	}
+
 	@Override
 	public void destroy() throws Exception {
 		executor.shutdown();
@@ -26,6 +26,7 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
@@ -92,9 +93,9 @@ public class KafkaBinderMetrics
 	Map<String, Long> unconsumedMessages = new ConcurrentHashMap<>();

 	public KafkaBinderMetrics(KafkaMessageChannelBinder binder,
-			KafkaBinderConfigurationProperties binderConfigurationProperties,
-			ConsumerFactory<?, ?> defaultConsumerFactory,
-			@Nullable MeterRegistry meterRegistry) {
+			KafkaBinderConfigurationProperties binderConfigurationProperties,
+			ConsumerFactory<?, ?> defaultConsumerFactory,
+			@Nullable MeterRegistry meterRegistry) {

 		this.binder = binder;
 		this.binderConfigurationProperties = binderConfigurationProperties;
@@ -104,7 +105,7 @@ public class KafkaBinderMetrics
 	}

 	public KafkaBinderMetrics(KafkaMessageChannelBinder binder,
-			KafkaBinderConfigurationProperties binderConfigurationProperties) {
+			KafkaBinderConfigurationProperties binderConfigurationProperties) {

 		this(binder, binderConfigurationProperties, null, null);
 	}
@@ -115,6 +116,15 @@ public class KafkaBinderMetrics

 	@Override
 	public void bindTo(MeterRegistry registry) {
+		/**
+		 * We can't just replace one scheduler with another.
+		 * Before and even after the old one is collected by GC, its threads still exist, consuming memory and CPU resources for context switches.
+		 * Theoretically, as a result of processing n topics, there would be about (1+n)*n/2 threads alive at the same time.
+		 */
+		if (this.scheduler != null) {
+			LOG.info("Try to shutdown the old scheduler with " + ((ScheduledThreadPoolExecutor) scheduler).getPoolSize() + " threads");
+			this.scheduler.shutdown();
+		}
+
 		this.scheduler = Executors.newScheduledThreadPool(this.binder.getTopicsInUse().size());
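Note: a worked illustration (hypothetical numbers, not from the commit) of the comment above. If bindTo() runs once per topic registration and never shuts down the previous pool, pools of size 1, 2, ..., n accumulate, i.e. roughly n*(n+1)/2 live threads:

	public class SchedulerLeakMath {
		public static void main(String[] args) {
			int n = 10; // topics registered one at a time
			int leaked = 0;
			for (int size = 1; size <= n; size++) {
				leaked += size; // each rebind leaves the previous pool's threads alive
			}
			System.out.println(leaked + " threads instead of " + n); // 55 instead of 10
		}
	}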
@@ -210,7 +220,7 @@ public class KafkaBinderMetrics
 		return lag;
 	}

-	private synchronized ConsumerFactory<?, ?> createConsumerFactory() {
+	private synchronized ConsumerFactory<?, ?> createConsumerFactory() {
 		if (this.defaultConsumerFactory == null) {
 			Map<String, Object> props = new HashMap<>();
 			props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
@@ -1,5 +1,5 @@
 /*
- * Copyright 2014-2019 the original author or authors.
+ * Copyright 2014-2021 the original author or authors.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -30,6 +30,7 @@ import java.util.Map;
 import java.util.Set;
 import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.function.Predicate;
 import java.util.regex.Pattern;
@@ -79,7 +80,6 @@ import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
 import org.springframework.cloud.stream.config.MessageSourceCustomizer;
 import org.springframework.cloud.stream.provisioning.ConsumerDestination;
 import org.springframework.cloud.stream.provisioning.ProducerDestination;
-import org.springframework.context.Lifecycle;
 import org.springframework.context.support.AbstractApplicationContext;
 import org.springframework.expression.Expression;
 import org.springframework.expression.common.LiteralExpression;
@@ -114,7 +114,9 @@ import org.springframework.kafka.support.ProducerListener;
 import org.springframework.kafka.support.SendResult;
 import org.springframework.kafka.support.TopicPartitionOffset;
 import org.springframework.kafka.support.TopicPartitionOffset.SeekPosition;
+import org.springframework.kafka.support.converter.MessageConverter;
 import org.springframework.kafka.support.converter.MessagingMessageConverter;
+import org.springframework.kafka.support.converter.RecordMessageConverter;
 import org.springframework.kafka.transaction.KafkaAwareTransactionManager;
 import org.springframework.kafka.transaction.KafkaTransactionManager;
 import org.springframework.lang.Nullable;
@@ -151,6 +153,7 @@ import org.springframework.util.concurrent.ListenableFutureCallback;
  * @author Henryk Konsek
  * @author Doug Saus
  * @author Lukasz Kaminski
+ * @author Taras Danylchuk
  */
 public class KafkaMessageChannelBinder extends
 		// @checkstyle:off
@@ -231,6 +234,8 @@ public class KafkaMessageChannelBinder extends

 	private ConsumerConfigCustomizer consumerConfigCustomizer;

+	private final List<AbstractMessageListenerContainer<?, ?>> kafkaMessageListenerContainers = new ArrayList<>();
+
 	public KafkaMessageChannelBinder(
 			KafkaBinderConfigurationProperties configurationProperties,
 			KafkaTopicProvisioner provisioningProvider) {
@@ -679,6 +684,8 @@ public class KafkaMessageChannelBinder extends
 			}

 		};

+		this.kafkaMessageListenerContainers.add(messageListenerContainer);
+
 		messageListenerContainer.setConcurrency(concurrency);
 		// these won't be needed if the container is made a bean
 		AbstractApplicationContext applicationContext = getApplicationContext();
@@ -694,8 +701,16 @@ public class KafkaMessageChannelBinder extends
 		messageListenerContainer.setBeanName(destination + ".container");
 		// end of these won't be needed...
 		ContainerProperties.AckMode ackMode = extendedConsumerProperties.getExtension().getAckMode();
-		if (ackMode == null && extendedConsumerProperties.getExtension().isAckEachRecord()) {
-			ackMode = ContainerProperties.AckMode.RECORD;
+		if (ackMode == null) {
+			if (extendedConsumerProperties.getExtension().isAckEachRecord()) {
+				ackMode = ContainerProperties.AckMode.RECORD;
+			}
+			else {
+				if (!extendedConsumerProperties.getExtension().isAutoCommitOffset()) {
+					messageListenerContainer.getContainerProperties()
+							.setAckMode(ContainerProperties.AckMode.MANUAL);
+				}
+			}
 		}
 		if (ackMode != null) {
 			if ((extendedConsumerProperties.isBatchMode() && ackMode != ContainerProperties.AckMode.RECORD) ||
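Note: a distilled sketch (hypothetical helper, not in the commit) of the resolution order the new block implements: an explicit ackMode wins; otherwise ackEachRecord=true maps to RECORD; otherwise the deprecated autoCommitOffset=false falls back to MANUAL; else the container default applies.

	import org.springframework.kafka.listener.ContainerProperties.AckMode;

	final class AckModeResolution {

		static AckMode resolve(AckMode ackMode, boolean ackEachRecord, boolean autoCommitOffset) {
			if (ackMode != null) {
				return ackMode; // explicit ackMode setting always wins
			}
			if (ackEachRecord) {
				return AckMode.RECORD; // legacy ackEachRecord=true
			}
			if (!autoCommitOffset) {
				return AckMode.MANUAL; // deprecated autoCommitOffset=false
			}
			return null; // keep the container's default ack mode
		}
	}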
@@ -712,7 +727,7 @@ public class KafkaMessageChannelBinder extends
 		final KafkaMessageDrivenChannelAdapter<?, ?> kafkaMessageDrivenChannelAdapter =
 				new KafkaMessageDrivenChannelAdapter<>(messageListenerContainer,
 						extendedConsumerProperties.isBatchMode() ? ListenerMode.batch : ListenerMode.record);
-		MessagingMessageConverter messageConverter = getMessageConverter(extendedConsumerProperties);
+		MessageConverter messageConverter = getMessageConverter(extendedConsumerProperties);
 		kafkaMessageDrivenChannelAdapter.setMessageConverter(messageConverter);
 		kafkaMessageDrivenChannelAdapter.setBeanFactory(getBeanFactory());
 		kafkaMessageDrivenChannelAdapter.setApplicationContext(applicationContext);
@@ -731,7 +746,8 @@ public class KafkaMessageChannelBinder extends
 			messageListenerContainer.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
 					(record, exception) -> {
 						MessagingException payload =
-								new MessagingException(messageConverter.toMessage(record, null, null, null),
+								new MessagingException(((RecordMessageConverter) messageConverter)
+										.toMessage(record, null, null, null),
 										"Transaction rollback limit exceeded", exception);
 						try {
 							errorInfrastructure.getErrorChannel()
@@ -990,7 +1006,10 @@ public class KafkaMessageChannelBinder extends

 		KafkaMessageSource<?, ?> source = new KafkaMessageSource<>(consumerFactory,
 				consumerProperties);
-		source.setMessageConverter(getMessageConverter(extendedConsumerProperties));
+		MessageConverter messageConverter = getMessageConverter(extendedConsumerProperties);
+		Assert.isInstanceOf(RecordMessageConverter.class, messageConverter,
+				"'messageConverter' must be a 'RecordMessageConverter' for polled consumers");
+		source.setMessageConverter((RecordMessageConverter) messageConverter);
 		source.setRawMessageHeader(extension.isEnableDlq());

 		if (!extendedConsumerProperties.isMultiplex()) {
@@ -1027,32 +1046,35 @@ public class KafkaMessageChannelBinder extends
 		});
 	}

-	private MessagingMessageConverter getMessageConverter(
+	private MessageConverter getMessageConverter(
 			final ExtendedConsumerProperties<KafkaConsumerProperties> extendedConsumerProperties) {
-		MessagingMessageConverter messageConverter;
+
+		MessageConverter messageConverter;
 		if (extendedConsumerProperties.getExtension().getConverterBeanName() == null) {
-			messageConverter = new MessagingMessageConverter();
+			MessagingMessageConverter mmc = new MessagingMessageConverter();
 			StandardHeaders standardHeaders = extendedConsumerProperties.getExtension()
 					.getStandardHeaders();
-			messageConverter
-					.setGenerateMessageId(StandardHeaders.id.equals(standardHeaders)
+			mmc.setGenerateMessageId(StandardHeaders.id.equals(standardHeaders)
 					|| StandardHeaders.both.equals(standardHeaders));
-			messageConverter.setGenerateTimestamp(
+			mmc.setGenerateTimestamp(
 					StandardHeaders.timestamp.equals(standardHeaders)
 							|| StandardHeaders.both.equals(standardHeaders));
+			messageConverter = mmc;
 		}
 		else {
 			try {
 				messageConverter = getApplicationContext().getBean(
 						extendedConsumerProperties.getExtension().getConverterBeanName(),
-						MessagingMessageConverter.class);
+						MessageConverter.class);
 			}
 			catch (NoSuchBeanDefinitionException ex) {
 				throw new IllegalStateException(
 						"Converter bean not present in application context", ex);
 			}
 		}
-		messageConverter.setHeaderMapper(getHeaderMapper(extendedConsumerProperties));
+		if (messageConverter instanceof MessagingMessageConverter) {
+			((MessagingMessageConverter) messageConverter).setHeaderMapper(getHeaderMapper(extendedConsumerProperties));
+		}
 		return messageConverter;
 	}
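Note: with getMessageConverter() now returning the MessageConverter interface, a converter supplied via the consumer's converterBeanName property no longer has to be a MessagingMessageConverter. A usage sketch (bean and binding names are illustrative, not from the commit):

	import org.springframework.context.annotation.Bean;
	import org.springframework.context.annotation.Configuration;
	import org.springframework.kafka.support.converter.MessagingMessageConverter;

	@Configuration
	public class ConverterConfig {

		@Bean
		public MessagingMessageConverter customConverter() {
			MessagingMessageConverter converter = new MessagingMessageConverter();
			converter.setGenerateTimestamp(true); // example customization
			return converter;
		}
	}

	// application.properties (illustrative):
	// spring.cloud.stream.kafka.bindings.process-in-0.consumer.converterBeanName=customConverter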
@@ -1129,8 +1151,21 @@ public class KafkaMessageChannelBinder extends
 		final KafkaTemplate<?, ?> kafkaTemplate = new KafkaTemplate<>(
 				producerFactory);

+		Object timeout = producerFactory.getConfigurationProperties().get(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG);
+		Long sendTimeout = null;
+		if (timeout instanceof Number) {
+			sendTimeout = ((Number) timeout).longValue() + 2000L;
+		}
+		else if (timeout instanceof String) {
+			sendTimeout = Long.parseLong((String) timeout) + 2000L;
+		}
+		if (timeout == null) {
+			sendTimeout = ((Integer) ProducerConfig.configDef()
+					.defaultValues()
+					.get(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG)).longValue() + 2000L;
+		}
 		@SuppressWarnings("rawtypes")
-		DlqSender<?, ?> dlqSender = new DlqSender(kafkaTemplate);
+		DlqSender<?, ?> dlqSender = new DlqSender(kafkaTemplate, sendTimeout);

 		return (message) -> {
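Note: a runnable check (assumes kafka-clients on the classpath; not part of the changeset) of the fallback branch above. With no delivery.timeout.ms configured on the producer, the DLQ sender waits for Kafka's default of 120000 ms plus the 2000 ms cushion, i.e. 122000 ms:

	import org.apache.kafka.clients.producer.ProducerConfig;

	public class DlqTimeoutDefaultDemo {
		public static void main(String[] args) {
			long defaultDeliveryTimeout = ((Integer) ProducerConfig.configDef()
					.defaultValues()
					.get(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG)).longValue();
			System.out.println(defaultDeliveryTimeout + 2000L); // prints 122000
		}
	}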
@@ -1397,8 +1432,7 @@ public class KafkaMessageChannelBinder extends
 			ExtendedConsumerProperties<KafkaConsumerProperties> properties) {
 		return properties.getExtension().getAutoCommitOnError() != null
 				? properties.getExtension().getAutoCommitOnError()
-				: properties.getExtension().isAutoCommitOffset()
-						&& properties.getExtension().isEnableDlq();
+				: false;
 	}

 	private TopicPartitionOffset[] getTopicPartitionOffsets(
@@ -1445,6 +1479,10 @@ public class KafkaMessageChannelBinder extends
 		this.producerConfigCustomizer = producerConfigCustomizer;
 	}

+	List<AbstractMessageListenerContainer<?, ?>> getKafkaMessageListenerContainers() {
+		return Collections.unmodifiableList(kafkaMessageListenerContainers);
+	}
+
 	private final class ProducerConfigurationMessageHandler
 			extends KafkaProducerMessageHandler<byte[], byte[]> {
@@ -1498,8 +1536,14 @@ public class KafkaMessageChannelBinder extends

 		@Override
 		public void stop() {
-			if (this.producerFactory instanceof Lifecycle) {
-				((Lifecycle) producerFactory).stop();
+			if (this.producerFactory instanceof DisposableBean) {
+				try {
+					((DisposableBean) producerFactory).destroy();
+				}
+				catch (Exception ex) {
+					this.logger.error(ex, "Error destroying the producer factory bean: ");
+					throw new RuntimeException(ex);
+				}
 			}
 			this.running = false;
 		}
@@ -1557,8 +1601,11 @@ public class KafkaMessageChannelBinder extends

 		private final KafkaTemplate<K, V> kafkaTemplate;

-		DlqSender(KafkaTemplate<K, V> kafkaTemplate) {
+		private final long sendTimeout;
+
+		DlqSender(KafkaTemplate<K, V> kafkaTemplate, long timeout) {
 			this.kafkaTemplate = kafkaTemplate;
+			this.sendTimeout = timeout;
 		}

 		@SuppressWarnings("unchecked")
@@ -1592,18 +1639,26 @@ public class KafkaMessageChannelBinder extends
 					public void onSuccess(SendResult<K, V> result) {
 						if (KafkaMessageChannelBinder.this.logger.isDebugEnabled()) {
 							KafkaMessageChannelBinder.this.logger
-									.debug("Sent to DLQ " + sb.toString());
-						}
-						if (ackMode == ContainerProperties.AckMode.MANUAL || ackMode == ContainerProperties.AckMode.MANUAL_IMMEDIATE) {
-							messageHeaders.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class).acknowledge();
+									.debug("Sent to DLQ " + sb.toString() + ": " + result.getRecordMetadata());
 						}
 					}
 				});
+				try {
+					sentDlq.get(this.sendTimeout, TimeUnit.MILLISECONDS);
+				}
+				catch (InterruptedException ex) {
+					Thread.currentThread().interrupt();
+					throw ex;
+				}
 			}
 			catch (Exception ex) {
-				if (sentDlq == null) {
-					KafkaMessageChannelBinder.this.logger
-							.error("Error sending to DLQ " + sb.toString(), ex);
-				}
+				KafkaMessageChannelBinder.this.logger
+						.error("Error sending to DLQ " + sb.toString(), ex);
+			}
+			finally {
+				if (ackMode == ContainerProperties.AckMode.MANUAL
+						|| ackMode == ContainerProperties.AckMode.MANUAL_IMMEDIATE) {
+					messageHeaders.get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class).acknowledge();
+				}
+			}
@@ -38,6 +38,7 @@ import org.springframework.cloud.stream.binder.kafka.KafkaNullConverter;
 import org.springframework.cloud.stream.binder.kafka.properties.JaasLoginModuleConfiguration;
 import org.springframework.cloud.stream.binder.kafka.properties.KafkaBinderConfigurationProperties;
 import org.springframework.cloud.stream.binder.kafka.properties.KafkaExtendedBindingProperties;
+import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
 import org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner;
 import org.springframework.cloud.stream.binder.kafka.utils.DlqDestinationResolver;
 import org.springframework.cloud.stream.binder.kafka.utils.DlqPartitionFunction;
@@ -104,8 +105,10 @@ public class KafkaBinderConfiguration {

 	@Bean
 	KafkaTopicProvisioner provisioningProvider(
-			KafkaBinderConfigurationProperties configurationProperties) {
-		return new KafkaTopicProvisioner(configurationProperties, this.kafkaProperties);
+			KafkaBinderConfigurationProperties configurationProperties,
+			ObjectProvider<AdminClientConfigCustomizer> adminClientConfigCustomizer) {
+		return new KafkaTopicProvisioner(configurationProperties,
+				this.kafkaProperties, adminClientConfigCustomizer.getIfUnique());
 	}

 	@SuppressWarnings("unchecked")
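Note: a usage sketch of the new hook: registering a single AdminClientConfigCustomizer bean lets an application adjust the admin-client properties the provisioner uses; getIfUnique() picks it up only when exactly one such bean exists. The lambda form assumes the customizer is a single-method interface over the config map; the security setting shown is illustrative only.

	import org.springframework.cloud.stream.binder.kafka.provisioning.AdminClientConfigCustomizer;
	import org.springframework.context.annotation.Bean;
	import org.springframework.context.annotation.Configuration;

	@Configuration
	public class AdminClientCustomizerConfig {

		@Bean
		public AdminClientConfigCustomizer adminClientConfigCustomizer() {
			// applied to the admin-client config before topic provisioning
			return adminClientProperties ->
					adminClientProperties.put("security.protocol", "SSL"); // example override
		}
	}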
@@ -43,10 +43,10 @@ import org.springframework.util.ObjectUtils;
 @Configuration
 @ConditionalOnClass(name = "org.springframework.boot.actuate.health.HealthIndicator")
 @ConditionalOnEnabledHealthIndicator("binders")
-class KafkaBinderHealthIndicatorConfiguration {
+public class KafkaBinderHealthIndicatorConfiguration {

 	@Bean
-	KafkaBinderHealthIndicator kafkaBinderHealthIndicator(
+	public KafkaBinderHealthIndicator kafkaBinderHealthIndicator(
 			KafkaMessageChannelBinder kafkaMessageChannelBinder,
 			KafkaBinderConfigurationProperties configurationProperties) {
 		Map<String, Object> props = new HashMap<>();
@@ -1,3 +1,4 @@
-org.springframework.boot.env.EnvironmentPostProcessor=\
-org.springframework.cloud.stream.binder.kafka.KafkaBinderEnvironmentPostProcessor
-org.springframework.boot.autoconfigure.EnableAutoConfiguration=org.springframework.cloud.stream.binder.kafka.config.ExtendedBindingHandlerMappingsProviderConfiguration
+org.springframework.boot.env.EnvironmentPostProcessor:\
+org.springframework.cloud.stream.binder.kafka.KafkaBinderEnvironmentPostProcessor
+org.springframework.boot.autoconfigure.EnableAutoConfiguration:\
+org.springframework.cloud.stream.binder.kafka.config.ExtendedBindingHandlerMappingsProviderConfiguration
@@ -65,7 +65,7 @@ public class AutoCreateTopicDisabledTests {
 		configurationProperties.setAutoCreateTopics(false);

 		KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
-				configurationProperties, kafkaProperties);
+				configurationProperties, kafkaProperties, null);
 		provisioningProvider.setMetadataRetryOperations(new RetryTemplate());

 		KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(
@@ -97,7 +97,7 @@ public class AutoCreateTopicDisabledTests {
 		configurationProperties.getConfiguration().put("max.block.ms", "3000");

 		KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
-				configurationProperties, kafkaProperties);
+				configurationProperties, kafkaProperties, null);
 		SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy(1);
 		final RetryTemplate metadataRetryOperations = new RetryTemplate();
 		metadataRetryOperations.setRetryPolicy(simpleRetryPolicy);
@@ -1,5 +1,5 @@
 /*
- * Copyright 2017-2018 the original author or authors.
+ * Copyright 2017-2021 the original author or authors.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
@@ -17,6 +17,8 @@
 package org.springframework.cloud.stream.binder.kafka;

+import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -34,7 +36,9 @@ import org.mockito.MockitoAnnotations;
 import org.springframework.boot.actuate.health.Health;
 import org.springframework.boot.actuate.health.Status;
 import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
+import org.springframework.kafka.listener.AbstractMessageListenerContainer;

+import static java.util.Collections.singleton;
 import static org.assertj.core.api.Assertions.assertThat;

 /**
@@ -43,6 +47,7 @@ import static org.assertj.core.api.Assertions.assertThat;
  * @author Laur Aliste
  * @author Soby Chacko
  * @author Chukwubuikem Ume-Ugwa
+ * @author Taras Danylchuk
  */
 public class KafkaBinderHealthIndicatorTest {

@@ -58,6 +63,12 @@ public class KafkaBinderHealthIndicatorTest {
 	@Mock
 	private KafkaConsumer consumer;

+	@Mock
+	AbstractMessageListenerContainer<?, ?> listenerContainerA;
+
+	@Mock
+	AbstractMessageListenerContainer<?, ?> listenerContainerB;
+
 	@Mock
 	private KafkaMessageChannelBinder binder;
@@ -73,6 +84,21 @@ public class KafkaBinderHealthIndicatorTest {
 		this.indicator.setTimeout(10);
 	}

+	@Test
+	public void kafkaBinderIsUpWithNoConsumers() {
+		final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
+		topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation(
+				"group1-healthIndicator", partitions, false));
+		org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC))
+				.willReturn(partitions);
+		org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
+				.willReturn(Collections.emptyList());
+
+		Health health = indicator.health();
+		assertThat(health.getStatus()).isEqualTo(Status.UP);
+		assertThat(health.getDetails()).containsEntry("topicsInUse", singleton(TEST_TOPIC));
+	}
+
 	@Test
 	public void kafkaBinderIsUp() {
 		final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
@@ -80,8 +106,42 @@ public class KafkaBinderHealthIndicatorTest {
 				"group1-healthIndicator", partitions, false));
 		org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC))
 				.willReturn(partitions);
+		org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
+				.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
+		mockContainer(listenerContainerA, true);
+		mockContainer(listenerContainerB, true);

 		Health health = indicator.health();
 		assertThat(health.getStatus()).isEqualTo(Status.UP);
+		assertThat(health.getDetails()).containsEntry("topicsInUse", singleton(TEST_TOPIC));
+		assertThat(health.getDetails()).hasEntrySatisfying("listenerContainers", value ->
+				assertThat((ArrayList<?>) value).hasSize(2));
 	}

+	@Test
+	public void kafkaBinderIsDownWhenOneOfConsumersIsNotRunning() {
+		final List<PartitionInfo> partitions = partitions(new Node(0, null, 0));
+		topicsInUse.put(TEST_TOPIC, new KafkaMessageChannelBinder.TopicInformation(
+				"group1-healthIndicator", partitions, false));
+		org.mockito.BDDMockito.given(consumer.partitionsFor(TEST_TOPIC))
+				.willReturn(partitions);
+		org.mockito.BDDMockito.given(binder.getKafkaMessageListenerContainers())
+				.willReturn(Arrays.asList(listenerContainerA, listenerContainerB));
+		mockContainer(listenerContainerA, false);
+		mockContainer(listenerContainerB, true);
+
+		Health health = indicator.health();
+		assertThat(health.getStatus()).isEqualTo(Status.DOWN);
+		assertThat(health.getDetails()).containsEntry("topicsInUse", singleton(TEST_TOPIC));
+		assertThat(health.getDetails()).hasEntrySatisfying("listenerContainers", value ->
+				assertThat((ArrayList<?>) value).hasSize(2));
+	}
+
+	private void mockContainer(AbstractMessageListenerContainer<?, ?> container, boolean isRunning) {
+		org.mockito.BDDMockito.given(container.isRunning()).willReturn(isRunning);
+		org.mockito.BDDMockito.given(container.isContainerPaused()).willReturn(true);
+		org.mockito.BDDMockito.given(container.getListenerId()).willReturn("someListenerId");
+		org.mockito.BDDMockito.given(container.getGroupId()).willReturn("someGroupId");
+	}
+
 	@Test
@@ -127,6 +127,7 @@ import org.springframework.kafka.support.KafkaHeaderMapper;
 import org.springframework.kafka.support.KafkaHeaders;
 import org.springframework.kafka.support.SendResult;
 import org.springframework.kafka.support.TopicPartitionOffset;
+import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
 import org.springframework.kafka.support.converter.MessagingMessageConverter;
 import org.springframework.kafka.test.core.BrokerAddress;
 import org.springframework.kafka.test.rule.EmbeddedKafkaRule;
@@ -209,7 +210,7 @@ public class KafkaBinderTests extends
 		if (binder == null) {
 			KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();
 			KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
-					binderConfiguration, new TestKafkaProperties());
+					binderConfiguration, new TestKafkaProperties(), null);
 			try {
 				kafkaTopicProvisioner.afterPropertiesSet();
 			}
@@ -232,7 +233,7 @@ public class KafkaBinderTests extends
 			DlqPartitionFunction dlqPartitionFunction, DlqDestinationResolver dlqDestinationResolver) {

 		KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
-				kafkaBinderConfigurationProperties, new TestKafkaProperties());
+				kafkaBinderConfigurationProperties, new TestKafkaProperties(), null);
 		try {
 			provisioningProvider.afterPropertiesSet();
 		}
@@ -401,7 +402,7 @@ public class KafkaBinderTests extends
 		binderConfiguration.setHeaderMapperBeanName("headerMapper");

 		KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
-				binderConfiguration, new TestKafkaProperties());
+				binderConfiguration, new TestKafkaProperties(), null);
 		try {
 			kafkaTopicProvisioner.afterPropertiesSet();
 		}
@@ -478,7 +479,7 @@ public class KafkaBinderTests extends
 		KafkaBinderConfigurationProperties binderConfiguration = createConfigurationProperties();

 		KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
-				binderConfiguration, new TestKafkaProperties());
+				binderConfiguration, new TestKafkaProperties(), null);
 		try {
 			kafkaTopicProvisioner.afterPropertiesSet();
 		}
@@ -612,6 +613,11 @@ public class KafkaBinderTests extends
 		DirectChannel moduleInputChannel = createBindableChannel("input",
 				createConsumerBindingProperties(consumerProperties));

+		MessagingMessageConverter mmc = new MessagingMessageConverter();
+		((GenericApplicationContext) ((KafkaTestBinder) binder).getApplicationContext())
+				.registerBean("tSARmmc", MessagingMessageConverter.class, () -> mmc);
+		consumerProperties.getExtension().setConverterBeanName("tSARmmc");
+
 		Binding<MessageChannel> producerBinding = binder.bindProducer("foo.bar",
 				moduleOutputChannel, outputBindingProperties.getProducer());
 		Binding<MessageChannel> consumerBinding = binder.bindConsumer("foo.bar",
@@ -653,6 +659,8 @@ public class KafkaBinderTests extends
 		assertThat(topic.isConsumerTopic()).isTrue();
 		assertThat(topic.getConsumerGroup()).isEqualTo("testSendAndReceive");

+		assertThat(KafkaTestUtils.getPropertyValue(consumerBinding, "lifecycle.recordListener.messageConverter"))
+				.isSameAs(mmc);
 		producerBinding.unbind();
 		consumerBinding.unbind();
 	}
@@ -670,6 +678,10 @@ public class KafkaBinderTests extends
 		consumerProperties.getExtension().getConfiguration().put("fetch.min.bytes", "1000");
 		consumerProperties.getExtension().getConfiguration().put("fetch.max.wait.ms", "5000");
 		consumerProperties.getExtension().getConfiguration().put("max.poll.records", "2");
+		BatchMessagingMessageConverter bmmc = new BatchMessagingMessageConverter();
+		((GenericApplicationContext) ((KafkaTestBinder) binder).getApplicationContext())
+				.registerBean("tSARBbmmc", BatchMessagingMessageConverter.class, () -> bmmc);
+		consumerProperties.getExtension().setConverterBeanName("tSARBbmmc");
 		DirectChannel moduleInputChannel = createBindableChannel("input",
 				createConsumerBindingProperties(consumerProperties));
@@ -709,6 +721,8 @@ public class KafkaBinderTests extends
 			assertThat(payload.get(1)).isEqualTo("bar".getBytes());
 		}

+		assertThat(KafkaTestUtils.getPropertyValue(consumerBinding, "lifecycle.batchListener.batchMessageConverter"))
+				.isSameAs(bmmc);
 		producerBinding.unbind();
 		consumerBinding.unbind();
 	}
@@ -3656,7 +3670,7 @@ public class KafkaBinderTests extends
 		binderConfiguration.setHeaderMapperBeanName("headerMapper");

 		KafkaTopicProvisioner kafkaTopicProvisioner = new KafkaTopicProvisioner(
-				binderConfiguration, new TestKafkaProperties());
+				binderConfiguration, new TestKafkaProperties(), null);
 		try {
 			kafkaTopicProvisioner.afterPropertiesSet();
 		}
@@ -78,7 +78,7 @@ public class KafkaBinderUnitTests {
 		KafkaBinderConfigurationProperties binderConfigurationProperties = new KafkaBinderConfigurationProperties(
 				kafkaProperties);
 		KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
-				binderConfigurationProperties, kafkaProperties);
+				binderConfigurationProperties, kafkaProperties, null);
 		KafkaMessageChannelBinder binder = new KafkaMessageChannelBinder(
 				binderConfigurationProperties, provisioningProvider);
 		KafkaConsumerProperties consumerProps = new KafkaConsumerProperties();
@@ -73,7 +73,7 @@ public class KafkaTransactionTests {
 		configurationProperties.getTransaction().setTransactionIdPrefix("foo-");
 		configurationProperties.getTransaction().getProducer().setUseNativeEncoding(true);
 		KafkaTopicProvisioner provisioningProvider = new KafkaTopicProvisioner(
-				configurationProperties, kafkaProperties);
+				configurationProperties, kafkaProperties, null);
 		provisioningProvider.setMetadataRetryOperations(new RetryTemplate());
 		final Producer mockProducer = mock(Producer.class);
 		given(mockProducer.send(any(), any())).willReturn(new SettableListenableFuture<>());