Spring Cloud Stream Partitioning Sample
========================================
This is a collection of applications that demonstrates how partitioning works in Spring Cloud Stream.
## Quick introduction
The producer in this sample emits messages whose text payload has a length of 1, 2, 3, or 4.
The producer's application.yml configures `partition-key-expression` to use the length of the payload minus 1 as the partition key.
The binder uses this key to select the target partition, based on the total number of partitions configured on the destination at the broker.
We use 4 partitions for this demo.
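As a quick illustration, the following minimal sketch (not part of the sample's code) shows how the key produced by `payload.length() - 1` maps to a partition; absent a custom selector, the binder's default selection takes the key's `hashCode()` modulo the partition count, so for keys 0 through 3 and 4 partitions the partition number simply equals the key:
```java
// Minimal sketch (not from the sample) of how the partition key maps to a
// partition: the key's hashCode() modulo the partition count (the binder's
// default selection, assuming no custom partition selector is configured).
public class PartitionMappingSketch {
    public static void main(String[] args) {
        int partitionCount = 4;
        for (String payload : new String[] {"a", "ab", "abc", "abcd"}) {
            Integer key = payload.length() - 1;                        // partition-key-expression
            int partition = Math.abs(key.hashCode()) % partitionCount; // default selection
            System.out.printf("payload=%-4s key=%d -> partition %d%n", payload, key, partition);
        }
    }
}
```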
There is a common producer module, `partitioning-producer`, and one consumer module per middleware: `partitioning-consumer-kafka` for Kafka and `partitioning-consumer-rabbit` for RabbitMQ.
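For orientation, here is a hedged sketch of what the producer module's functional bean might look like; the class name, bean name, and payload generation are illustrative assumptions, not the sample's actual code:
```java
// Hypothetical sketch of the producer module's supplier bean; the class name,
// bean name, and payload generation are assumptions for illustration only.
import java.util.Random;
import java.util.function.Supplier;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class PartitioningProducerSketch {

    private static final Random RANDOM = new Random();

    public static void main(String[] args) {
        SpringApplication.run(PartitioningProducerSketch.class, args);
    }

    @Bean
    public Supplier<String> generate() {
        // Emit "a", "ab", "abc", or "abcd" so payload lengths are 1 to 4.
        return () -> "abcd".substring(0, RANDOM.nextInt(4) + 1);
    }
}
```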
Follow the instructions below to run the demo for Kafka or RabbitMQ.
## Running the sample for Kafka
The following instructions assume that you are running Kafka as a Docker image.
* `docker-compose up -d`
* `cd partitioning-consumer-kafka`
* `./mvnw clean package`
* `java -jar target/partitioning-consumer-kafka-0.0.1-SNAPSHOT.jar --server.port=9008`
On another terminal, start another instance of the consumer.
* `java -jar target/partitioning-consumer-kafka-0.0.1-SNAPSHOT.jar --server.port=9009`
* `cd ../partitioning-producer`
* `./mvnw clean package`
* `java -jar target/partitioning-producer-0.0.1-SNAPSHOT.jar --server.port=9010`
The producer randomly sends messages with a string length of 1, 2, 3, or 4.
Watch the consumer console logs and verify that the correct partitions are receiving the messages.
Each log message includes the payload and the partition it was received from.
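The consumer side could be as simple as the following sketch; the bean name and the header lookup are assumptions based on common Spring Cloud Stream conventions (and the partition header constant was `RECEIVED_PARTITION_ID` in older spring-kafka versions):
```java
// Hypothetical consumer sketch logging the payload and the partition it
// arrived on; the bean name and header constant are assumptions.
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@Configuration
public class ConsumerSketch {

    @Bean
    public Consumer<Message<String>> listen() {
        return message -> System.out.println("Payload: " + message.getPayload()
                + ", partition: " + message.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION));
    }
}
```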
Once you are done testing, stop all the instances.
* `docker-compose down`
## Running the sample for Rabbit
The following instructions assume that you are running RabbitMQ as a Docker image.
Make sure that you are at the root directory of the partitioning samples (`partitioning-samples`).
The RabbitMQ partitioning demo is slightly different from the Kafka one: we need to spin up four consumers, one for each of the four partitions.
* `docker-compose -f docker-compose-rabbit.yml up -d`
* `cd partitioning-consumer-rabbit`
* `./mvnw clean package`
* `java -jar target/partitioning-consumer-rabbit-0.0.1-SNAPSHOT.jar --server.port=9005`
On another terminal, start another instance of the consumer, this time setting `instanceIndex` to 1 (the first instance relies on the default `instanceIndex` of 0).
* `java -jar target/partitioning-consumer-rabbit-0.0.1-SNAPSHOT.jar --server.port=9006 --spring.cloud.stream.bindings.input.consumer.instanceIndex=1`
On another terminal start another instance of the consumer.
* `java -jar target/partitioning-consumer-rabbit-0.0.1-SNAPSHOT.jar --server.port=9007 --spring.cloud.stream.bindings.input.consumer.instanceIndex=2`
On another terminal start yet another instance of the consumer.
* `java -jar target/partitioning-consumer-rabbit-0.0.1-SNAPSHOT.jar --server.port=9008 --spring.cloud.stream.bindings.input.consumer.instanceIndex=3`
* `cd ../partitioning-producer`
* `./mvnw clean package -P rabbit-binder`
* `java -jar target/partitioning-producer-0.0.1-SNAPSHOT.jar --server.port=9010`
The producer randomly sends messages with a string length of 1, 2, 3, or 4.
Watch the consumer console logs and verify that the correct instances are receiving the messages.
The first consumer we started should receive messages with a string length of 1 (partition-0), the second consumer, with `instanceIndex` set to 1, should receive messages with a string length of 2 (partition-1), and so on.
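Putting it together, this illustration is derived purely from the statements above, not from the sample's code:
```java
// Illustration of the mapping described above: payload length -> partition
// -> consumer instanceIndex (one consumer per partition).
public class RabbitPartitionMappingSketch {
    public static void main(String[] args) {
        int partitionCount = 4;
        for (int length = 1; length <= 4; length++) {
            int partition = (length - 1) % partitionCount; // partition-key-expression result
            // With one consumer per partition, instanceIndex matches the partition.
            System.out.printf("length %d -> partition-%d -> instanceIndex %d%n",
                    length, partition, partition);
        }
    }
}
```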
Once you are done testing, stop all the instances.
* `docker-compose -f docker-compose-rabbit.yml down`