Compare commits

v2.2.0.M1 ... v2.1.0.RC2

1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | 633f19b294 |  |

README.adoc (34 changed lines)
@@ -21,7 +21,9 @@ It contains information about its design, usage, and configuration options, as w
 In addition, this guide explains the Kafka Streams binding capabilities of Spring Cloud Stream.
 --
 
-== Usage
+== Apache Kafka Binder
+
+=== Usage
 
 To use Apache Kafka binder, you need to add `spring-cloud-stream-binder-kafka` as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:
 
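The dependency snippet itself is elided from this hunk; only the closing `</dependency>` tag survives as context below. For reference, a minimal sketch of the coordinates named in the text, with the version left to the Spring Cloud BOM:

[source,xml]
----
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
----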
@@ -43,7 +45,7 @@ Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown
 </dependency>
 ----
 
-== Apache Kafka Binder Overview
+=== Overview
 
 The following image shows a simplified diagram of how the Apache Kafka binder operates:
 
@@ -59,13 +61,13 @@ This client can communicate with older brokers (see the Kafka documentation), bu
 For example, with versions earlier than 0.11.x.x, native headers are not supported.
 Also, 0.11.x.x does not support the `autoAddPartitions` property.
 
-== Configuration Options
+=== Configuration Options
 
 This section contains the configuration options used by the Apache Kafka binder.
 
 For common configuration options and properties pertaining to binder, see the <<binding-properties,core documentation>>.
 
-=== Kafka Binder Properties
+==== Kafka Binder Properties
 
 spring.cloud.stream.kafka.binder.brokers::
 A list of brokers to which the Kafka binder connects.
@@ -154,7 +156,7 @@ Use this, for example, if you wish to customize the trusted packages in a `Defau
 Default: none.
 
 [[kafka-consumer-properties]]
-=== Kafka Consumer Properties
+==== Kafka Consumer Properties
 
 The following properties are available for Kafka consumers only and
 must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.consumer.`.
@@ -270,7 +272,7 @@ This can be configured using the `configuration` property above.
 Default: `false`
 
 [[kafka-producer-properties]]
-=== Kafka Producer Properties
+==== Kafka Producer Properties
 
 The following properties are available for Kafka producers only and
 must be prefixed with `spring.cloud.stream.kafka.bindings.<channelName>.producer.`.
@@ -331,11 +333,11 @@ If a topic already exists with a smaller partition count and `autoAddPartitions`
 If a topic already exists with a smaller partition count and `autoAddPartitions` is enabled, new partitions are added.
 If a topic already exists with a larger number of partitions than the maximum of (`minPartitionCount` or `partitionCount`), the existing partition count is used.
 
-=== Usage examples
+==== Usage examples
 
 In this section, we show the use of the preceding properties for specific scenarios.
 
-==== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
+===== Example: Setting `autoCommitOffset` to `false` and Relying on Manual Acking
 
 This example illustrates how one may manually acknowledge offsets in a consumer application.
 
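The body of this example class is elided from the diff; only its closing lines appear as context in the next hunk. For reference, a minimal sketch of the documented pattern, assuming the standard `Sink` binding with `autoCommitOffset` set to `false` so that the binder stages an `Acknowledgment` header on each message:

[source,java]
----
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;

@SpringBootApplication
@EnableBinding(Sink.class)
public class ManuallyAcknowdledgingConsumer {

    public static void main(String[] args) {
        SpringApplication.run(ManuallyAcknowdledgingConsumer.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void process(Message<?> message) {
        // The header is present only when autoCommitOffset is false
        Acknowledgment acknowledgment =
                message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        if (acknowledgment != null) {
            acknowledgment.acknowledge(); // commit the offset manually
        }
    }
}
----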
@@ -363,7 +365,7 @@ public class ManuallyAcknowdledgingConsumer {
 }
 ----
 
-==== Example: Security Configuration
+===== Example: Security Configuration
 
 Apache Kafka 0.9 supports secure connections between client and brokers.
 To take advantage of this feature, follow the guidelines in the http://kafka.apache.org/090/documentation.html#security_configclients[Apache Kafka Documentation] as well as the Kafka 0.9 http://docs.confluent.io/2.0.0/kafka/security.html[security guidelines from the Confluent documentation].
@@ -382,7 +384,7 @@ When using Kerberos, follow the instructions in the http://kafka.apache.org/090/
 
 Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.
 
-===== Using JAAS Configuration Files
+====== Using JAAS Configuration Files
 
 The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties.
 The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:
@@ -395,7 +397,7 @@ The following example shows how to launch a Spring Cloud Stream application with
 --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT
 ----
 
-===== Using Spring Boot Properties
+====== Using Spring Boot Properties
 
 As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.
 
@@ -452,7 +454,7 @@ Consequently, relying on Spring Cloud Stream to create/modify topics may fail.
 In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.
 
 [[pause-resume]]
-==== Example: Pausing and Resuming the Consumer
+===== Example: Pausing and Resuming the Consumer
 
 If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer.
 This is facilitated by adding the `Consumer` as a parameter to your `@StreamListener`.
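The example code itself is elided here; the next hunk shows only the closing lines of the `Application` class. A minimal sketch of the pattern the text describes, pausing via the `Consumer` injected into the `@StreamListener` and resuming when the container goes idle; the topic name and partition are placeholders:

[source,java]
----
import java.util.Collections;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;

@EnableBinding(Sink.class)
public class PauseResumeExample {

    @StreamListener(Sink.INPUT)
    public void in(String payload, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        System.out.println(payload);
        // Suspend consumption without triggering a partition rebalance
        consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));
    }

    @Bean
    public ApplicationListener<ListenerContainerIdleEvent> idleListener() {
        // Resume all currently paused partitions once the container reports idle
        return event -> event.getConsumer().resume(event.getConsumer().paused());
    }
}
----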
@@ -492,7 +494,7 @@ public class Application {
 ----
 
 [[kafka-error-channels]]
-== Error Channels
+=== Error Channels
 
 Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel.
 See <<spring-cloud-stream-overview-error-handling>> for more information.
@@ -506,7 +508,7 @@ There is no automatic handling of producer exceptions (such as sending to a <<ka
 You can consume these exceptions with your own Spring Integration flow.
 
 [[kafka-metrics]]
-== Kafka Metrics
+=== Kafka Metrics
 
 Kafka binder module exposes the following metrics:
 
@@ -515,7 +517,7 @@ The metrics provided are based on the Mircometer metrics library. The metric con
 This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.
 
 [[kafka-tombstones]]
-== Tombstone Records (null record values)
+=== Tombstone Records (null record values)
 
 When using compacted topics, a record with a `null` value (also called a tombstone record) represents the deletion of a key.
 To receive such messages in a `@StreamListener` method, the parameter must be marked as not required to receive a `null` value argument.
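The surrounding listener is truncated in this diff; the next hunk header carries its first parameter line. A minimal sketch of a tombstone-tolerant listener, where `Customer` is a stand-in for the application's payload type:

[source,java]
----
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

@EnableBinding(Sink.class)
public class TombstoneConsumer {

    @StreamListener(Sink.INPUT)
    public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
                   @Payload(required = false) Customer customer) {
        if (customer == null) {
            // tombstone: the key has been deleted on the compacted topic
            return;
        }
        // normal processing otherwise
    }

    // Placeholder payload type for this sketch
    static class Customer {
    }
}
----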
@@ -533,7 +535,7 @@ public void in(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) byte[] key,
 ====
 
 [[rebalance-listener]]
-== Using a KafkaRebalanceListener
+=== Using a KafkaRebalanceListener
 
 Applications may wish to seek topics/partitions to arbitrary offsets when the partitions are initially assigned, or perform other operations on the consumer.
 Starting with version 2.1, if you provide a single `KafkaRebalanceListener` bean in the application context, it will be wired into all Kafka consumer bindings.
@@ -1,5 +1,5 @@
 [[kafka-dlq-processing]]
-== Dead-Letter Topic Processing
+=== Dead-Letter Topic Processing
 
 Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them.
 If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic.
@@ -1,4 +1,6 @@
-== Usage
+== Kafka Streams Binder
+
+=== Usage
 
 For using the Kafka Streams binder, you just need to add it to your Spring Cloud Stream application, using the following
 Maven coordinates:
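As with the Kafka binder earlier, the Maven snippet is elided and only `</dependency>` remains as context below. A minimal sketch of the artifact the text refers to, version again deferred to the Spring Cloud BOM:

[source,xml]
----
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
----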
@@ -11,7 +13,8 @@ Maven coordinates:
 </dependency>
 ----
 
-== Kafka Streams Binder Overview
+
+=== Overview
 
 Spring Cloud Stream's Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka
 Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the
@@ -32,7 +35,7 @@ As noted early-on, Kafka Streams support in Spring Cloud Stream is strictly only
 A model in which the messages read from an inbound topic, business processing can be applied, and the transformed messages
 can be written to an outbound topic. It can also be used in Processor applications with a no-outbound destination.
 
-=== Streams DSL
+==== Streams DSL
 
 This application consumes data from a Kafka topic (e.g., `words`), computes word count for each unique word in a 5 seconds
 time window, and the computed results are sent to a downstream topic (e.g., `counts`) for further processing.
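The word-count processor referenced by this hunk is not itself part of the diff. A condensed sketch of such an application under stated assumptions: the binding interface is declared locally (mirroring the processor interface the binder ships), and the 5-second window and output format are illustrative:

[source,java]
----
import java.util.Arrays;

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(WordCountProcessorApplication.WordsProcessor.class)
public class WordCountProcessorApplication {

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, String> process(KStream<Object, String> input) {
        return input
                // split each incoming line into words
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .map((key, word) -> new KeyValue<>(word, word))
                .groupByKey()
                // count occurrences of each word in 5-second windows
                .windowedBy(TimeWindows.of(5000))
                .count(Materialized.as("word-counts"))
                .toStream()
                .map((windowedWord, count) ->
                        new KeyValue<>(null, windowedWord.key() + ": " + count));
    }

    // Local equivalent of the binder's processor binding interface
    interface WordsProcessor {

        @Input("input")
        KStream<?, ?> input();

        @Output("output")
        KStream<?, ?> output();
    }
}
----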
@@ -75,13 +78,13 @@ KStream objects. As a developer, you can exclusively focus on the business aspec
 required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure
 is automatically handled by the framework.
 
-== Configuration Options
+=== Configuration Options
 
 This section contains the configuration options used by the Kafka Streams binder.
 
 For common configuration options and properties pertaining to binder, refer to the <<binding-properties,core documentation>>.
 
-=== Kafka Streams Properties
+==== Kafka Streams Properties
 
 The following properties are available at the binder level and must be prefixed with `spring.cloud.stream.kafka.streams.binder.`
 
@@ -172,7 +175,7 @@ Default: `earliest`.
 Note: Using `resetOffsets` on the consumer does not have any effect on Kafka Streams binder.
 Unlike the message channel based binder, Kafka Streams binder does not seek to beginning or end on demand.
 
-=== TimeWindow properties:
+==== TimeWindow properties:
 
 Windowing is an important concept in stream processing applications. Following properties are available to configure
 time-window computations.
@@ -187,14 +190,14 @@ spring.cloud.stream.kafka.streams.timeWindow.advanceBy::
 +
 Default: `none`.
 
-== Multiple Input Bindings
+=== Multiple Input Bindings
 
 For use cases that requires multiple incoming KStream objects or a combination of KStream and KTable objects, the Kafka
 Streams binder provides multiple bindings support.
 
 Let's see it in action.
 
-=== Multiple Input Bindings as a Sink
+==== Multiple Input Bindings as a Sink
 
 [source]
 ----
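The `[source]` block that follows this heading is elided from the hunk. A minimal sketch of a two-input sink under stated assumptions: the binding names `inputStream` and `inputTable` are hypothetical, and the nested interface is patterned on the `KStreamKTableBinding` visible in the next hunk header:

[source,java]
----
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;

@EnableBinding(MultipleInputsSink.KStreamKTableSink.class)
public class MultipleInputsSink {

    @StreamListener
    public void process(@Input("inputStream") KStream<String, Long> events,
                        @Input("inputTable") KTable<String, String> lookup) {
        // join each event against the co-partitioned table and log the result
        events.join(lookup, (count, name) -> name + ": " + count)
              .foreach((key, line) -> System.out.println(line));
    }

    // Hypothetical binding interface: two inputs, no output (a sink)
    interface KStreamKTableSink {

        @Input("inputStream")
        KStream<?, ?> inputStream();

        @Input("inputTable")
        KTable<?, ?> inputTable();
    }
}
----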
@@ -261,7 +264,7 @@ interface KStreamKTableBinding extends KafkaStreamsProcessor {
 
 ----
 
-== Multiple Output Bindings (aka Branching)
+=== Multiple Output Bindings (aka Branching)
 
 Kafka Streams allow outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides
 support for this feature without compromising the programming model exposed through `StreamListener` in the end user application.
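A compact sketch of the branching pattern described here; the predicates and output names are illustrative, and each branch is routed, in declaration order, to one of the outputs listed in `@SendTo`:

[source,java]
----
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Predicate;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

public class BranchingProcessor {

    @StreamListener("input")
    @SendTo({"english", "french", "spanish"})
    public KStream<Object, String>[] process(KStream<Object, String> input) {
        Predicate<Object, String> isEnglish = (key, value) -> value.contains("english");
        Predicate<Object, String> isFrench  = (key, value) -> value.contains("french");
        Predicate<Object, String> isSpanish = (key, value) -> value.contains("spanish");
        // branch() fans the stream out into one KStream per matching predicate
        return input.branch(isEnglish, isFrench, isSpanish);
    }
}
----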
@@ -347,7 +350,7 @@ spring.cloud.stream.bindings.input:
 headerMode: raw
 ----
 
-== Record Value Conversion
+=== Record Value Conversion
 
 Kafka Streams binder can marshal producer/consumer values based on a content type and the converters provided out of the box in Spring Cloud Stream.
 
@@ -359,7 +362,7 @@ would like to continue using that for inbound and outbound conversions.
 
 Both the options are supported in the Kafka Streams binder implementation. See below for more details.
 
-==== Outbound serialization
+===== Outbound serialization
 
 If native encoding is disabled (which is the default), then the framework will convert the message using the contentType
 set by the user (otherwise, the default `application/json` will be applied). It will ignore any SerDe set on the outbound
@@ -448,7 +451,7 @@ spring.cloud.stream.bindings.output2.contentType: application/java-serialzied-ob
 spring.cloud.stream.bindings.output3.contentType: application/octet-stream
 ----
 
-==== Inbound Deserialization
+===== Inbound Deserialization
 
 Similar rules apply to data deserialization on the inbound.
 
@@ -503,7 +506,7 @@ As in the case of KStream branching on the outbound, the benefit of setting valu
 multiple input bindings (multiple KStreams object) and they all require separate value SerDe's, then you can configure
 them individually. If you use the common configuration approach, then this feature won't be applicable.
 
-== Error Handling
+=== Error Handling
 
 Apache Kafka Streams provide the capability for natively handling exceptions from deserialization errors.
 For details on this support, please see https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers[this]
@@ -544,7 +547,7 @@ that if there are multiple `StreamListener` methods in the same application, thi
 * The exception handling for deserialization works consistently with native deserialization and framework provided message
 conversion.
 
-=== Handling Non-Deserialization Exceptions
+==== Handling Non-Deserialization Exceptions
 
 For general error handling in Kafka Streams binder, it is up to the end user applications to handle application level errors.
 As a side effect of providing a DLQ for deserialization exception handlers, Kafka Streams binder provides a way to get
@@ -594,7 +597,7 @@ public KStream<?, WordCount> process(KStream<Object, String> input) {
 }
 ----
 
-== State Store
+=== State Store
 
 State store is created automatically by Kafka Streams when the DSL is used.
 When processor API is used, you need to register a state store manually. In order to do so, you can use `KafkaStreamsStateStore` annotation.
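The annotated example is truncated; the next hunk header shows its inner `Processor<Object, Product>`. A sketch of registering and reading a store via the annotation, under stated assumptions: the attribute names and import locations follow this era of the binder as best understood, and `Product` is a stand-in type:

[source,java]
----
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.WindowStore;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.binder.kafka.streams.annotations.KafkaStreamsStateStore;
import org.springframework.cloud.stream.binder.kafka.streams.properties.KafkaStreamsStateStoreProperties;

public class StateStoreExample {

    @StreamListener("input")
    @KafkaStreamsStateStore(name = "mystate",
            type = KafkaStreamsStateStoreProperties.StoreType.WINDOW, lengthMs = 300000)
    public void process(KStream<Object, Product> input) {
        input.process(() -> new Processor<Object, Product>() {

            private WindowStore<Object, String> state;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                // look up the store registered by the annotation above
                state = (WindowStore<Object, String>) context.getStateStore("mystate");
            }

            @Override
            public void process(Object key, Product value) {
                // read from / write to the window store here
            }

            @Override
            public void close() {
            }
        }, "mystate");
    }

    // Placeholder payload type for this sketch
    static class Product {
    }
}
----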
@@ -626,7 +629,7 @@ Processor<Object, Product>() {
 }
 ----
 
-== Interactive Queries
+=== Interactive Queries
 
 As part of the public Kafka Streams binder API, we expose a class called `InteractiveQueryService`.
 You can access this as a Spring bean in your application. An easy way to get access to this bean from your application is to "autowire" the bean.
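A minimal sketch of querying a state store through this service; the store name `word-counts` is a placeholder and the generics are illustrative:

[source,java]
----
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binder.kafka.streams.InteractiveQueryService;
import org.springframework.stereotype.Component;

@Component
public class WordCountQueryService {

    @Autowired
    private InteractiveQueryService interactiveQueryService;

    public Long countFor(String word) {
        // resolve the queryable store by name and type, then read a single key
        ReadOnlyKeyValueStore<String, Long> store =
                interactiveQueryService.getQueryableStore("word-counts",
                        QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(word);
    }
}
----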
@@ -671,7 +674,7 @@ else {
 }
 ----
 
-== Accessing the underlying KafkaStreams object
+=== Accessing the underlying KafkaStreams object
 
 `StreamBuilderFactoryBean` from spring-kafka that is responsible for constructing the `KafkaStreams` object can be accessed programmatically.
 Each `StreamBuilderFactoryBean` is registered as `stream-builder` and appended with the `StreamListener` method name.
@@ -685,7 +688,7 @@ StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-b
 KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();
 ----
 
-== State Cleanup
+=== State Cleanup
 
 By default, the `Kafkastreams.cleanup()` method is called when the binding is stopped.
 See https://docs.spring.io/spring-kafka/reference/html/_reference.html#_configuration[the Spring Kafka documentation].
@@ -1,4 +1,4 @@
-== Partitioning with the Kafka Binder
+=== Partitioning with the Kafka Binder
 
 Apache Kafka supports topic partitioning natively.
 
@@ -38,6 +38,8 @@ include::dlq.adoc[]
 
 include::partitions.adoc[]
 
+include::kafka-streams.adoc[]
+
 = Appendices
 [appendix]
 include::building.adoc[]