Handle thirdparty directory in .gitignore

.gitignore (vendored, 1 change)
@@ -1,4 +1,5 @@
 HELP.md
+thirdparty/
 .gradle
 build/
 src/main/generated/

document/kafka/connect.md (new file, 218 lines)
@@ -0,0 +1,218 @@
In the [previous chapter (link)](https://imprint.tistory.com/232), we looked at how to send data using Kafka Producers and Consumers.
This chapter covers how to set up `Kafka Connect`.
All source code is available on [GitHub (link)](https://github.com/roy-zz/spring-cloud).

---

### Kafka Connect

To move data from one place to another, we usually write code that transfers it record by record.
With `Kafka Connect`, however, data can be imported and exported through configuration alone, without writing any code.
It supports a `Standalone mode` that runs in a single worker process and a `Distributed mode` that runs across multiple workers.
These features are exposed through a RESTful API, and data can be transferred as a stream or in batches.
A variety of plugins provide endpoints for systems such as S3 and Hive.



To try out Connect, we will install MariaDB for the order service and wire the two together.

---

### Order Service

#### Installing MariaDB

Let's install MariaDB on the macOS machine that runs the project.

1. Install MariaDB

Run the command below to install MariaDB via `Brew`.

```bash
$ brew install mariadb
```

If the output matches the image below, the installation completed successfully.


2. Start MariaDB

There are several ways to start MariaDB; since it was installed with `Brew`, we will start it with `Brew` as well.

```bash
$ brew services start mariadb
```



**Stop MariaDB**: `brew services stop mariadb`
**Check MariaDB status**: `brew services info mariadb`
3. Connect

Run the command below to verify that the connection works.

```bash
$ mysql -uroot
```

If you run into an `Access denied` error as I did, resolve it as follows.

```bash
$ sudo mysql -u root

MariaDB> SELECT user, host, plugin FROM mysql.user;
MariaDB> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');
MariaDB> FLUSH PRIVILEGES;
MariaDB> exit
```
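
On recent MariaDB versions the `SET PASSWORD ... = PASSWORD(...)` form may be rejected; an equivalent statement (my assumption, not part of the original walkthrough) is:

```sql
-- run inside the MariaDB client; sets the same 'root' password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
```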
Choose whatever password is convenient for you.
If you reconnect with the additional -p flag, as shown below, you will be prompted for a password.
Enter the password you set above and connect.



4. Create a database

Inside the MariaDB client, run the statement below to create the `mydb` database that the order service will use.

```sql
CREATE DATABASE mydb;
```
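
You can confirm that the database was created (an optional check, not in the original steps):

```sql
SHOW DATABASES;
```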

---

### Updating the Order Service

Until now the order service used an embedded H2 database.
To use Kafka's Connect feature, we modify it to use the MariaDB instance installed above.

1. Add the dependency

Add the MariaDB client dependency to the build.gradle file as shown below.

```groovy
implementation 'org.mariadb.jdbc:mariadb-java-client:3.0.4'
```

2. Modify the application.yml file

In the previous step we installed MariaDB, created the `mydb` database, and changed the administrator password to `root`.
Update the `application.yml` file so that the order service connects with the same credentials.

```yaml
# omitted
h2:
  console:
    enabled: true
    settings:
      web-allow-others: true
    path: /h2-console
jpa:
  hibernate:
    ddl-auto: update
datasource:
  driver-class-name: org.mariadb.jdbc.Driver
  url: jdbc:mariadb://localhost:3306/mydb
  username: root
  password: root
# omitted
```

Once the changes are in place, restart the order service.

3. Connect to the h2-console

The name is still h2-console, but from now on we will use the h2-console to connect to MariaDB.
Change the settings as in the image below, entering the password you configured.


---

### Installing Connect

I will install version 7.1.1, the latest release at the time of writing.
For information on the latest releases, see [here](https://docs.confluent.io/platform/current/release-notes/index.html).

1. Download confluent-community

Enter the address below in a web browser to download confluent-community.

`https://packages.confluent.io/archive/7.1/confluent-community-7.1.1.tar.gz`

2. Extract the confluent download

Extract the downloaded archive.
I extracted it in the same directory where `Kafka` is installed; the files are also in my [Git repository](https://github.com/roy-zz/spring-cloud).



3. Run connect

Move to the directory where confluent was extracted and run the command below to start `connect`.

```bash
$ ./bin/connect-distributed ./etc/kafka/connect-distributed.properties
```



4. Verify that connect is running

If `connect` started correctly, topics we have not seen before should have been added to `Kafka`.
Run the command below to list Kafka's `Topic`s.

```bash
$ ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
```

If everything started correctly, `connect`-related topics should now be listed, as below.



5. Download the JDBC Connector

Enter the address below in a web browser to download the latest kafka-connect-jdbc archive.

`https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc`
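
Alternatively, if the `confluent-hub` CLI that ships with the Confluent Platform is available, the connector can be installed from the command line (a sketch; using the `latest` tag is an assumption):

```bash
# installs the JDBC connector into the local Confluent installation
$ confluent-hub install confluentinc/kafka-connect-jdbc:latest
```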

6. Extract the JDBC Connector

Extract the downloaded archive.
I extracted it to the same location as confluent, and it is managed in the same way.



7. Set the JDBC lib path

Check the path of the `lib` directory under the `confluentinc-kafka-connect-jdbc` directory, as in the image below.



Edit the connect-distributed.properties file under {confluent install path}/etc/kafka/ as shown below.



8. Download the MariaDB Java Client

Go to the address below in a browser and download the MariaDB Java Client jar file.

`https://mvnrepository.com/artifact/org.mariadb.jdbc/mariadb-java-client/3.0.4`

Move the downloaded jar file into the ${Confluent install path}/share/java/kafka directory, as in the image below.



---

With this, everything needed to use Kafka Connect is in place.
In the next chapter we look at how to actually use a `Source Connect` and a `Sink Connect`.

---

**Reference course:**

- https://www.inflearn.com/course/%EC%8A%A4%ED%94%84%EB%A7%81-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C-%EB%A7%88%EC%9D%B4%ED%81%AC%EB%A1%9C%EC%84%9C%EB%B9%84%EC%8A%A4
BIN  document/kafka/connect_image/add-confluent.png  (Normal file, 64 KiB)
BIN  document/kafka/connect_image/add-kafka-connect-jdbc.png  (Normal file, 77 KiB)
BIN  document/kafka/connect_image/add-mariadb-java-client.png  (Normal file, 27 KiB)
BIN  document/kafka/connect_image/add-topic-for-connect.png  (Normal file, 16 KiB)
BIN  document/kafka/connect_image/connector-flow.png  (Normal file, 642 KiB)
BIN  document/kafka/connect_image/copy-lib-directory.png  (Normal file, 112 KiB)
BIN  document/kafka/connect_image/h2-console-set-mariadb.png  (Normal file, 139 KiB)
BIN  document/kafka/connect_image/installed-mariadb.png  (Normal file, 121 KiB)
BIN  document/kafka/connect_image/modify-plugin-path.png  (Normal file, 103 KiB)
BIN  document/kafka/connect_image/renew-mariadb-password.png  (Normal file, 29 KiB)
BIN  document/kafka/connect_image/start-connect.png  (Normal file, 200 KiB)
BIN  document/kafka/connect_image/started-mariadb.png  (Normal file, 12 KiB)

document/kafka/sink_source.md (new file, 127 lines)
@@ -0,0 +1,127 @@
In the [previous chapter (link)](https://imprint.tistory.com/233) we looked at how to install `Kafka Connect`.
This chapter covers how to use a `Source Connect` and a `Sink Connect`.
All source code is available on [GitHub (link)](https://github.com/roy-zz/spring-cloud).

---

Using MariaDB with a `Connect Source` and a `Connect Sink`, we will build the structure shown below.



The `Connect Source` receives data from the producing side and delivers it to the Kafka Cluster, while the `Connect Sink` delivers data from the Kafka Cluster to the target datastore.
Previously we produced and consumed data through the Kafka Cluster using a Producer and a Consumer.
This time we produce and consume data using a Connect Source and a Connect Sink.

### Source Connect

We will call `connect`'s connectors API with Postman to register a new connector.

Among the data we send, the `table.whitelist` field specifies the name of the table we are interested in.
`connect` then stands by and, whenever new rows are inserted into that table, fetches the newly inserted data.
`topic.prefix` is the prefix of the topic used to deliver the data when new rows arrive in the whitelisted table; the topic name is this prefix joined with the table name, so here it will be `roy_topic_users`.
In short, a `Source Connect` watches the tables we specify and, when it detects a change, publishes the data to the topic we specified.

1. Call the POST /connectors API

Put the JSON below in the HTTP body and call `connect`'s /connectors API.

```json
{
    "name": "my-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "root",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "roy_topic_",
        "tasks.max": "1"
    }
}
```
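
If you prefer the command line to Postman, the same registration can be done with curl (a sketch, assuming the JSON above was saved as source-connect.json):

```bash
# register the source connector through the Connect REST API
$ curl -X POST -H "Content-Type: application/json" \
    --data @source-connect.json \
    http://localhost:8083/connectors
```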

If a 201 status code is returned as below, the `Source Connect` was registered successfully.
If a 500 error occurs instead, it may be because Connect was not restarted after finishing the configuration in the previous chapter; try restarting Connect.



**Note**

Calling GET http://localhost:8083/connectors returns the list of registered connectors.



Calling GET http://localhost:8083/connectors/my-source-connect returns the connector's details.



2. Register a Consumer

Start a Consumer to watch the data that the `Source Connect` produces.

```bash
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic roy_topic_users --from-beginning
```



3. Insert user data

To check that the expected data is delivered when a new user row is inserted, connect to the MariaDB console and register a new user.

```sql
INSERT INTO users (user_id, password, name) VALUES ('roy', 'password', 'roy choi');
```
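
Note that this assumes the `users` table already exists; because the connector runs in incrementing mode on the `id` column, the table needs an auto-increment `id`. A minimal sketch of such a schema (an assumption, since the original does not show the DDL):

```sql
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY, -- required by "mode": "incrementing"
    user_id VARCHAR(255),
    password VARCHAR(255),
    name VARCHAR(255)
);
```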

The Consumer is receiving the data as expected.
In the image below, the left pane is the Consumer and the right pane is where the rows are inserted.



**Note**

Even after completing steps 1 through 3, there were cases where the Consumer received no data.
I resolved it as described below; if you hit the same problem, this may help.

Symptom: the Consumer receives no data, and the connect log shows a SQL Exception.
Fix:
- As in the image below, add a mysql-connector.jar to the directory where the mariadb-connector.jar was added. (The mysql jar can be downloaded [here](https://dev.mysql.com/downloads/file/?id=510648).)



- When calling connect's API to add the `Source Connect`, use a mysql connection address instead of mariadb, as below.

```json
{
    "name": "roy-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "root",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "roy_topic_",
        "tasks.max": "1"
    }
}
```

If the problem occurred for you as well, the same approach will likely resolve it.

---

**Reference course:**

- https://www.inflearn.com/course/%EC%8A%A4%ED%94%84%EB%A7%81-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C-%EB%A7%88%EC%9D%B4%ED%81%AC%EB%A1%9C%EC%84%9C%EB%B9%84%EC%8A%A4
BIN  document/kafka/sink_source_image/add-mysql-connector.png  (Normal file, 14 KiB)
BIN  document/kafka/sink_source_image/connector-flow.png  (Normal file, 642 KiB)
BIN  document/kafka/sink_source_image/get-connectors-api.png  (Normal file, 45 KiB)
BIN  document/kafka/sink_source_image/get-connectors-detail-api.png  (Normal file, 100 KiB)
BIN  document/kafka/sink_source_image/consume-insert-user-data.png  (Normal file, 54 KiB)
BIN  document/kafka/sink_source_image/post-connectors-api.png  (Normal file, 139 KiB)
BIN  document/kafka/sink_source_image/register-consumer.png  (Normal file, 24 KiB)

BIN  thirdparty/kafka_2.13-3.1.0.tgz  (vendored)

thirdparty/kafka_2.13-3.1.0/LICENSE (vendored, 321 lines deleted)
@@ -1,321 +0,0 @@
thirdparty/kafka_2.13-3.1.0/NOTICE (vendored, 856 lines deleted)
@@ -1,856 +0,0 @@
|
||||
Apache Kafka
|
||||
Copyright 2021 The Apache Software Foundation.
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (https://www.apache.org/).
|
||||
|
||||
This distribution has a binary dependency on jersey, which is available under the CDDL
|
||||
License. The source code of jersey can be found at https://github.com/jersey/jersey/.
|
||||
|
||||
This distribution has a binary test dependency on jqwik, which is available under
|
||||
the Eclipse Public License 2.0. The source code can be found at
|
||||
https://github.com/jlink/jqwik.
|
||||
|
||||
The streams-scala (streams/streams-scala) module was donated by Lightbend and the original code was copyrighted by them:
|
||||
Copyright (C) 2018 Lightbend Inc. <https://www.lightbend.com>
|
||||
Copyright (C) 2017-2018 Alexis Seigneurin.
|
||||
|
||||
This project contains the following code copied from Apache Hadoop:
|
||||
clients/src/main/java/org/apache/kafka/common/utils/PureJavaCrc32C.java
|
||||
Some portions of this file Copyright (c) 2004-2006 Intel Corporation and licensed under the BSD license.
|
||||
|
||||
This project contains the following code copied from Apache Hive:
|
||||
streams/src/main/java/org/apache/kafka/streams/state/internals/Murmur3.java
|
||||
|
||||
// ------------------------------------------------------------------
|
||||
// NOTICE file corresponding to the section 4d of The Apache License,
|
||||
// Version 2.0, in this case for
|
||||
// ------------------------------------------------------------------
|
||||
|
||||
# Notices for Eclipse GlassFish
|
||||
|
||||
This content is produced and maintained by the Eclipse GlassFish project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.glassfish
|
||||
|
||||
## Trademarks
|
||||
|
||||
Eclipse GlassFish, and GlassFish are trademarks of the Eclipse Foundation.
|
||||
|
||||
## Copyright
|
||||
|
||||
All content is the property of the respective authors or their employers. For
|
||||
more information regarding authorship of content, please consult the listed
|
||||
source code repository logs.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Public License v. 2.0 which is available at
|
||||
http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made
|
||||
available under the following Secondary Licenses when the conditions for such
|
||||
availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU
|
||||
General Public License, version 2 with the GNU Classpath Exception which is
|
||||
available at https://www.gnu.org/software/classpath/license.html.
|
||||
|
||||
SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0
|
||||
|
||||
## Source Code
|
||||
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/glassfish-ha-api
|
||||
* https://github.com/eclipse-ee4j/glassfish-logging-annotation-processor
|
||||
* https://github.com/eclipse-ee4j/glassfish-shoal
|
||||
* https://github.com/eclipse-ee4j/glassfish-cdi-porting-tck
|
||||
* https://github.com/eclipse-ee4j/glassfish-jsftemplating
|
||||
* https://github.com/eclipse-ee4j/glassfish-hk2-extra
|
||||
* https://github.com/eclipse-ee4j/glassfish-hk2
|
||||
* https://github.com/eclipse-ee4j/glassfish-fighterfish
|
||||
|
||||
## Third-party Content
|
||||
|
||||
This project leverages the following third party content.
|
||||
|
||||
None
|
||||
|
||||
## Cryptography
|
||||
|
||||
Content may contain encryption software. The country in which you are currently
|
||||
may have restrictions on the import, possession, and use, and/or re-export to
|
||||
another country, of encryption software. BEFORE using any encryption software,
|
||||
please check the country's laws, regulations and policies concerning the import,
|
||||
possession, or use, and re-export of encryption software, to see if this is
|
||||
permitted.
|
||||
|
||||
|
||||
Apache Yetus - Audience Annotations
|
||||
Copyright 2015-2017 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
Apache Commons CLI
|
||||
Copyright 2001-2017 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
Apache Commons Lang
|
||||
Copyright 2001-2018 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
# Jackson JSON processor
|
||||
|
||||
Jackson is a high-performance, Free/Open Source JSON processing library.
|
||||
It was originally written by Tatu Saloranta (tatu.saloranta@iki.fi), and has
|
||||
been in development since 2007.
|
||||
It is currently developed by a community of developers, as well as supported
|
||||
commercially by FasterXML.com.
|
||||
|
||||
## Licensing
|
||||
|
||||
Jackson core and extension components may licensed under different licenses.
|
||||
To find the details that apply to this artifact see the accompanying LICENSE file.
|
||||
For more information, including possible other licensing options, contact
|
||||
FasterXML.com (http://fasterxml.com).
|
||||
|
||||
## Credits
|
||||
|
||||
A list of contributors may be found from CREDITS file, which is included
|
||||
in some artifacts (usually source distributions); but is always available
|
||||
from the source code management (SCM) system project uses.
|
||||
|
||||
|
||||
# Notices for Eclipse Project for JAF
|
||||
|
||||
This content is produced and maintained by the Eclipse Project for JAF project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.jaf
|
||||
|
||||
## Copyright
|
||||
|
||||
All content is the property of the respective authors or their employers. For
|
||||
more information regarding authorship of content, please consult the listed
|
||||
source code repository logs.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Distribution License v. 1.0,
|
||||
which is available at http://www.eclipse.org/org/documents/edl-v10.php.
|
||||
|
||||
SPDX-License-Identifier: BSD-3-Clause
|
||||
|
||||
## Source Code
|
||||
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/jaf
|
||||
|
||||
## Third-party Content
|
||||
|
||||
This project leverages the following third party content.
|
||||
|
||||
JUnit (4.12)
|
||||
|
||||
* License: Eclipse Public License
|
||||
|
||||
|
||||
# Notices for Jakarta Annotations
|
||||
|
||||
This content is produced and maintained by the Jakarta Annotations project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.ca
|
||||
|
||||
## Trademarks
|
||||
|
||||
Jakarta Annotations is a trademark of the Eclipse Foundation.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Public License v. 2.0 which is available at
|
||||
http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made
|
||||
available under the following Secondary Licenses when the conditions for such
|
||||
availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU
|
||||
General Public License, version 2 with the GNU Classpath Exception which is
|
||||
available at https://www.gnu.org/software/classpath/license.html.
|
||||
|
||||
SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0
|
||||
|
||||
## Source Code
|
||||
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/common-annotations-api
|
||||
|
||||
## Third-party Content
|
||||
|
||||
## Cryptography
|
||||
|
||||
Content may contain encryption software. The country in which you are currently
|
||||
may have restrictions on the import, possession, and use, and/or re-export to
|
||||
another country, of encryption software. BEFORE using any encryption software,
|
||||
please check the country's laws, regulations and policies concerning the import,
|
||||
possession, or use, and re-export of encryption software, to see if this is
|
||||
permitted.
|
||||
|
||||
|
||||
# Notices for the Jakarta RESTful Web Services Project
|
||||
|
||||
This content is produced and maintained by the **Jakarta RESTful Web Services**
|
||||
project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.jaxrs
|
||||
|
||||
## Trademarks
|
||||
|
||||
**Jakarta RESTful Web Services** is a trademark of the Eclipse Foundation.
|
||||
|
||||
## Copyright
|
||||
|
||||
All content is the property of the respective authors or their employers. For
|
||||
more information regarding authorship of content, please consult the listed
|
||||
source code repository logs.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Public License v. 2.0 which is available at
|
||||
http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made
|
||||
available under the following Secondary Licenses when the conditions for such
|
||||
availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU
|
||||
General Public License, version 2 with the GNU Classpath Exception which is
|
||||
available at https://www.gnu.org/software/classpath/license.html.
|
||||
|
||||
SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0
|
||||
|
||||
## Source Code
|
||||
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/jaxrs-api
|
||||
|
||||
## Third-party Content
|
||||
|
||||
This project leverages the following third party content.
|
||||
|
||||
javaee-api (7.0)
|
||||
|
||||
* License: Apache-2.0 AND W3C
|
||||
|
||||
JUnit (4.11)
|
||||
|
||||
* License: Common Public License 1.0
|
||||
|
||||
Mockito (2.16.0)
|
||||
|
||||
* Project: http://site.mockito.org
|
||||
* Source: https://github.com/mockito/mockito/releases/tag/v2.16.0
|
||||
|
||||
## Cryptography
|
||||
|
||||
Content may contain encryption software. The country in which you are currently
|
||||
may have restrictions on the import, possession, and use, and/or re-export to
|
||||
another country, of encryption software. BEFORE using any encryption software,
|
||||
please check the country's laws, regulations and policies concerning the import,
|
||||
possession, or use, and re-export of encryption software, to see if this is
|
||||
permitted.
|
||||
|
||||
|
||||
# Notices for Eclipse Project for JAXB
|
||||
|
||||
This content is produced and maintained by the Eclipse Project for JAXB project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.jaxb
|
||||
|
||||
## Trademarks
|
||||
|
||||
Eclipse Project for JAXB is a trademark of the Eclipse Foundation.
|
||||
|
||||
## Copyright
|
||||
|
||||
All content is the property of the respective authors or their employers. For
|
||||
more information regarding authorship of content, please consult the listed
|
||||
source code repository logs.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Distribution License v. 1.0 which is available
|
||||
at http://www.eclipse.org/org/documents/edl-v10.php.
|
||||
|
||||
SPDX-License-Identifier: BSD-3-Clause
|
||||
|
||||
## Source Code
|
||||
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/jaxb-api
|
||||
|
||||
## Third-party Content
|
||||
|
||||
This project leverages the following third party content.
|
||||
|
||||
None
|
||||
|
||||
## Cryptography
|
||||
|
||||
Content may contain encryption software. The country in which you are currently
|
||||
may have restrictions on the import, possession, and use, and/or re-export to
|
||||
another country, of encryption software. BEFORE using any encryption software,
|
||||
please check the country's laws, regulations and policies concerning the import,
|
||||
possession, or use, and re-export of encryption software, to see if this is
|
||||
permitted.
|
||||
|
||||
|
||||
# Notice for Jersey
|
||||
This content is produced and maintained by the Eclipse Jersey project.
|
||||
|
||||
* Project home: https://projects.eclipse.org/projects/ee4j.jersey
|
||||
|
||||
## Trademarks
|
||||
Eclipse Jersey is a trademark of the Eclipse Foundation.
|
||||
|
||||
## Copyright
|
||||
|
||||
All content is the property of the respective authors or their employers. For
|
||||
more information regarding authorship of content, please consult the listed
|
||||
source code repository logs.
|
||||
|
||||
## Declared Project Licenses
|
||||
|
||||
This program and the accompanying materials are made available under the terms
|
||||
of the Eclipse Public License v. 2.0 which is available at
|
||||
http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made
|
||||
available under the following Secondary Licenses when the conditions for such
|
||||
availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU
|
||||
General Public License, version 2 with the GNU Classpath Exception which is
|
||||
available at https://www.gnu.org/software/classpath/license.html.
|
||||
|
||||
SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0
|
||||
|
||||
## Source Code
|
||||
The project maintains the following source code repositories:
|
||||
|
||||
* https://github.com/eclipse-ee4j/jersey
|
||||
|
||||
## Third-party Content
|
||||
|
||||
Angular JS, v1.6.6
|
||||
* License MIT (http://www.opensource.org/licenses/mit-license.php)
|
||||
* Project: http://angularjs.org
|
||||
* Coyright: (c) 2010-2017 Google, Inc.
|
||||
|
||||
aopalliance Version 1
|
||||
* License: all the source code provided by AOP Alliance is Public Domain.
|
||||
* Project: http://aopalliance.sourceforge.net
|
||||
* Copyright: Material in the public domain is not protected by copyright
|
||||
|
||||
Bean Validation API 2.0.2
|
||||
* License: Apache License, 2.0
|
||||
* Project: http://beanvalidation.org/1.1/
|
||||
* Copyright: 2009, Red Hat, Inc. and/or its affiliates, and individual contributors
|
||||
* by the @authors tag.
|
||||
|
||||
Hibernate Validator CDI, 6.1.2.Final
|
||||
* License: Apache License, 2.0
|
||||
* Project: https://beanvalidation.org/
|
||||
* Repackaged in org.glassfish.jersey.server.validation.internal.hibernate
|
||||
|
||||
Bootstrap v3.3.7
|
||||
* License: MIT license (https://github.com/twbs/bootstrap/blob/master/LICENSE)
|
||||
* Project: http://getbootstrap.com
|
||||
* Copyright: 2011-2016 Twitter, Inc
|
||||
|
||||
Google Guava Version 18.0
|
||||
* License: Apache License, 2.0
|
||||
* Copyright (C) 2009 The Guava Authors
|
||||
|
||||
javax.inject Version: 1
|
||||
* License: Apache License, 2.0
|
||||
* Copyright (C) 2009 The JSR-330 Expert Group
|
||||
|
||||
Javassist Version 3.25.0-GA
|
||||
* License: Apache License, 2.0
|
||||
* Project: http://www.javassist.org/
|
||||
* Copyright (C) 1999- Shigeru Chiba. All Rights Reserved.
|
||||
|
||||
Jackson JAX-RS Providers Version 2.10.1
|
||||
* License: Apache License, 2.0
|
||||
* Project: https://github.com/FasterXML/jackson-jaxrs-providers
|
||||
* Copyright: (c) 2009-2011 FasterXML, LLC. All rights reserved unless otherwise indicated.
|
||||
|
||||
jQuery v1.12.4
|
||||
* License: jquery.org/license
|
||||
* Project: jquery.org
|
||||
* Copyright: (c) jQuery Foundation
|
||||
|
||||
jQuery Barcode plugin 0.3
|
||||
* License: MIT & GPL (http://www.opensource.org/licenses/mit-license.php & http://www.gnu.org/licenses/gpl.html)
|
||||
* Project: http://www.pasella.it/projects/jQuery/barcode
|
||||
* Copyright: (c) 2009 Antonello Pasella antonello.pasella@gmail.com
|
||||
|
||||
JSR-166 Extension - JEP 266
|
||||
* License: CC0
|
||||
* No copyright
|
||||
* Written by Doug Lea with assistance from members of JCP JSR-166 Expert Group and released to the public domain, as explained at http://creativecommons.org/publicdomain/zero/1.0/
|
||||
|
||||
KineticJS, v4.7.1
|
||||
* License: MIT license (http://www.opensource.org/licenses/mit-license.php)
|
||||
* Project: http://www.kineticjs.com, https://github.com/ericdrowell/KineticJS
|
||||
* Copyright: Eric Rowell
|
||||
|
||||
org.objectweb.asm Version 8.0
|
||||
* License: Modified BSD (http://asm.objectweb.org/license.html)
|
||||
* Copyright (c) 2000-2011 INRIA, France Telecom. All rights reserved.
|
||||
|
||||
org.osgi.core version 6.0.0
|
||||
* License: Apache License, 2.0
|
||||
* Copyright (c) OSGi Alliance (2005, 2008). All Rights Reserved.
|
||||
|
||||
org.glassfish.jersey.server.internal.monitoring.core
|
||||
* License: Apache License, 2.0
|
||||
* Copyright (c) 2015-2018 Oracle and/or its affiliates. All rights reserved.
|
||||
* Copyright 2010-2013 Coda Hale and Yammer, Inc.
|
||||
|
||||
W3.org documents
|
||||
* License: W3C License
|
||||
* Copyright: Copyright (c) 1994-2001 World Wide Web Consortium, (Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University). All Rights Reserved. http://www.w3.org/Consortium/Legal/
|
||||
|
||||
|
||||
==============================================================
|
||||
Jetty Web Container
|
||||
Copyright 1995-2018 Mort Bay Consulting Pty Ltd.
|
||||
==============================================================
|
||||
|
||||
The Jetty Web Container is Copyright Mort Bay Consulting Pty Ltd
|
||||
unless otherwise noted.
|
||||
|
||||
Jetty is dual licensed under both
|
||||
|
||||
* The Apache 2.0 License
|
||||
http://www.apache.org/licenses/LICENSE-2.0.html
|
||||
|
||||
and
|
||||
|
||||
* The Eclipse Public 1.0 License
|
||||
http://www.eclipse.org/legal/epl-v10.html
|
||||
|
||||
Jetty may be distributed under either license.
|
||||
|
||||
------
|
||||
Eclipse
|
||||
|
||||
The following artifacts are EPL.
|
||||
* org.eclipse.jetty.orbit:org.eclipse.jdt.core
|
||||
|
||||
The following artifacts are EPL and ASL2.
|
||||
* org.eclipse.jetty.orbit:javax.security.auth.message
|
||||
|
||||
|
||||
The following artifacts are EPL and CDDL 1.0.
|
||||
* org.eclipse.jetty.orbit:javax.mail.glassfish
|
||||
|
||||
|
||||
------
|
||||
Oracle
|
||||
|
||||
The following artifacts are CDDL + GPLv2 with classpath exception.
|
||||
https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html
|
||||
|
||||
* javax.servlet:javax.servlet-api
|
||||
* javax.annotation:javax.annotation-api
|
||||
* javax.transaction:javax.transaction-api
|
||||
* javax.websocket:javax.websocket-api
|
||||
|
||||
------
|
||||
Oracle OpenJDK
|
||||
|
||||
If ALPN is used to negotiate HTTP/2 connections, then the following
|
||||
artifacts may be included in the distribution or downloaded when ALPN
|
||||
module is selected.
|
||||
|
||||
* java.sun.security.ssl
|
||||
|
||||
These artifacts replace/modify OpenJDK classes. The modififications
|
||||
are hosted at github and both modified and original are under GPL v2 with
|
||||
classpath exceptions.
|
||||
http://openjdk.java.net/legal/gplv2+ce.html
|
||||
|
||||
|
||||
------
|
||||
OW2
|
||||
|
||||
The following artifacts are licensed by the OW2 Foundation according to the
|
||||
terms of http://asm.ow2.org/license.html
|
||||
|
||||
org.ow2.asm:asm-commons
|
||||
org.ow2.asm:asm
|
||||
|
||||
|
||||
------
|
||||
Apache
|
||||
|
||||
The following artifacts are ASL2 licensed.
|
||||
|
||||
org.apache.taglibs:taglibs-standard-spec
|
||||
org.apache.taglibs:taglibs-standard-impl
|
||||
|
||||
|
||||
------
|
||||
MortBay
|
||||
|
||||
The following artifacts are ASL2 licensed. Based on selected classes from
|
||||
following Apache Tomcat jars, all ASL2 licensed.
|
||||
|
||||
org.mortbay.jasper:apache-jsp
|
||||
org.apache.tomcat:tomcat-jasper
|
||||
org.apache.tomcat:tomcat-juli
|
||||
org.apache.tomcat:tomcat-jsp-api
|
||||
org.apache.tomcat:tomcat-el-api
|
||||
org.apache.tomcat:tomcat-jasper-el
|
||||
org.apache.tomcat:tomcat-api
|
||||
org.apache.tomcat:tomcat-util-scan
|
||||
org.apache.tomcat:tomcat-util
|
||||
|
||||
org.mortbay.jasper:apache-el
|
||||
org.apache.tomcat:tomcat-jasper-el
|
||||
org.apache.tomcat:tomcat-el-api
|
||||
|
||||
|
||||
------
|
||||
Mortbay
|
||||
|
||||
The following artifacts are CDDL + GPLv2 with classpath exception.
|
||||
|
||||
https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html
|
||||
|
||||
org.eclipse.jetty.toolchain:jetty-schemas
|
||||
|
||||
------
|
||||
Assorted
|
||||
|
||||
The UnixCrypt.java code implements the one way cryptography used by
|
||||
Unix systems for simple password protection. Copyright 1996 Aki Yoshida,
|
||||
modified April 2001 by Iris Van den Broeke, Daniel Deville.
|
||||
Permission to use, copy, modify and distribute UnixCrypt
|
||||
for non-commercial or commercial purposes and without fee is
|
||||
granted provided that the copyright notice appears in all copies.
|
||||
|
||||
|
||||
Apache log4j
|
||||
Copyright 2007 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
Maven Artifact
|
||||
Copyright 2001-2019 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
This product includes software developed by the Indiana University
|
||||
Extreme! Lab (http://www.extreme.indiana.edu/).
|
||||
|
||||
This product includes software developed by
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
This product includes software developed by
|
||||
ThoughtWorks (http://www.thoughtworks.com).
|
||||
|
||||
This product includes software developed by
|
||||
javolution (http://javolution.org/).
|
||||
|
||||
This product includes software developed by
|
||||
Rome (https://rome.dev.java.net/).
|
||||
|
||||
|
||||
Scala
|
||||
Copyright (c) 2002-2020 EPFL
|
||||
Copyright (c) 2011-2020 Lightbend, Inc.
|
||||
|
||||
Scala includes software developed at
|
||||
LAMP/EPFL (https://lamp.epfl.ch/) and
|
||||
Lightbend, Inc. (https://www.lightbend.com/).
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License").
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
|
||||
This software includes projects with other licenses -- see `doc/LICENSE.md`.
|
||||
|
||||
|
||||
Apache ZooKeeper - Server
|
||||
Copyright 2008-2021 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
Apache ZooKeeper - Jute
|
||||
Copyright 2008-2021 The Apache Software Foundation
|
||||
|
||||
This product includes software developed at
|
||||
The Apache Software Foundation (http://www.apache.org/).
|
||||
|
||||
|
||||
The Netty Project
|
||||
=================
|
||||
|
||||
Please visit the Netty web site for more information:
|
||||
|
||||
* https://netty.io/
|
||||
|
||||
Copyright 2014 The Netty Project
|
||||
|
||||
The Netty Project licenses this file to you under the Apache License,
|
||||
version 2.0 (the "License"); you may not use this file except in compliance
|
||||
with the License. You may obtain a copy of the License at:
|
||||
|
||||
https://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
License for the specific language governing permissions and limitations
|
||||
under the License.
|
||||
|
||||
Also, please refer to each LICENSE.<component>.txt file, which is located in
|
||||
the 'license' directory of the distribution file, for the license terms of the
|
||||
components that this product depends on.
|
||||
|
||||
-------------------------------------------------------------------------------
|
||||
This product contains the extensions to Java Collections Framework which has
|
||||
been derived from the works by JSR-166 EG, Doug Lea, and Jason T. Greene:
|
||||
|
||||
* LICENSE:
|
||||
* license/LICENSE.jsr166y.txt (Public Domain)
* HOMEPAGE:
* http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/
* http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbosscache/experimental/jsr166/

This product contains a modified version of Robert Harder's Public Domain
Base64 Encoder and Decoder, which can be obtained at:

* LICENSE:
* license/LICENSE.base64.txt (Public Domain)
* HOMEPAGE:
* http://iharder.sourceforge.net/current/java/base64/

This product contains a modified portion of 'Webbit', an event based
WebSocket and HTTP server, which can be obtained at:

* LICENSE:
* license/LICENSE.webbit.txt (BSD License)
* HOMEPAGE:
* https://github.com/joewalnes/webbit

This product contains a modified portion of 'SLF4J', a simple logging
facade for Java, which can be obtained at:

* LICENSE:
* license/LICENSE.slf4j.txt (MIT License)
* HOMEPAGE:
* https://www.slf4j.org/

This product contains a modified portion of 'Apache Harmony', an open source
Java SE, which can be obtained at:

* NOTICE:
* license/NOTICE.harmony.txt
* LICENSE:
* license/LICENSE.harmony.txt (Apache License 2.0)
* HOMEPAGE:
* https://archive.apache.org/dist/harmony/

This product contains a modified portion of 'jbzip2', a Java bzip2 compression
and decompression library written by Matthew J. Francis. It can be obtained at:

* LICENSE:
* license/LICENSE.jbzip2.txt (MIT License)
* HOMEPAGE:
* https://code.google.com/p/jbzip2/

This product contains a modified portion of 'libdivsufsort', a C API library to construct
the suffix array and the Burrows-Wheeler transformed string for any input string of
a constant-size alphabet written by Yuta Mori. It can be obtained at:

* LICENSE:
* license/LICENSE.libdivsufsort.txt (MIT License)
* HOMEPAGE:
* https://github.com/y-256/libdivsufsort

This product contains a modified portion of Nitsan Wakart's 'JCTools', Java Concurrency Tools for the JVM,
which can be obtained at:

* LICENSE:
* license/LICENSE.jctools.txt (ASL2 License)
* HOMEPAGE:
* https://github.com/JCTools/JCTools

This product optionally depends on 'JZlib', a re-implementation of zlib in
pure Java, which can be obtained at:

* LICENSE:
* license/LICENSE.jzlib.txt (BSD style License)
* HOMEPAGE:
* http://www.jcraft.com/jzlib/

This product optionally depends on 'Compress-LZF', a Java library for encoding and
decoding data in LZF format, written by Tatu Saloranta. It can be obtained at:

* LICENSE:
* license/LICENSE.compress-lzf.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/ning/compress

This product optionally depends on 'lz4', a LZ4 Java compression
and decompression library written by Adrien Grand. It can be obtained at:

* LICENSE:
* license/LICENSE.lz4.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/jpountz/lz4-java

This product optionally depends on 'lzma-java', a LZMA Java compression
and decompression library, which can be obtained at:

* LICENSE:
* license/LICENSE.lzma-java.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/jponge/lzma-java

This product contains a modified portion of 'jfastlz', a Java port of FastLZ compression
and decompression library written by William Kinney. It can be obtained at:

* LICENSE:
* license/LICENSE.jfastlz.txt (MIT License)
* HOMEPAGE:
* https://code.google.com/p/jfastlz/

This product contains a modified portion of and optionally depends on 'Protocol Buffers', Google's data
interchange format, which can be obtained at:

* LICENSE:
* license/LICENSE.protobuf.txt (New BSD License)
* HOMEPAGE:
* https://github.com/google/protobuf

This product optionally depends on 'Bouncy Castle Crypto APIs' to generate
a temporary self-signed X.509 certificate when the JVM does not provide the
equivalent functionality. It can be obtained at:

* LICENSE:
* license/LICENSE.bouncycastle.txt (MIT License)
* HOMEPAGE:
* https://www.bouncycastle.org/

This product optionally depends on 'Snappy', a compression library produced
by Google Inc, which can be obtained at:

* LICENSE:
* license/LICENSE.snappy.txt (New BSD License)
* HOMEPAGE:
* https://github.com/google/snappy

This product optionally depends on 'JBoss Marshalling', an alternative Java
serialization API, which can be obtained at:

* LICENSE:
* license/LICENSE.jboss-marshalling.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/jboss-remoting/jboss-marshalling

This product optionally depends on 'Caliper', Google's micro-
benchmarking framework, which can be obtained at:

* LICENSE:
* license/LICENSE.caliper.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/google/caliper

This product optionally depends on 'Apache Commons Logging', a logging
framework, which can be obtained at:

* LICENSE:
* license/LICENSE.commons-logging.txt (Apache License 2.0)
* HOMEPAGE:
* https://commons.apache.org/logging/

This product optionally depends on 'Apache Log4J', a logging framework, which
can be obtained at:

* LICENSE:
* license/LICENSE.log4j.txt (Apache License 2.0)
* HOMEPAGE:
* https://logging.apache.org/log4j/

This product optionally depends on 'Aalto XML', an ultra-high performance
non-blocking XML processor, which can be obtained at:

* LICENSE:
* license/LICENSE.aalto-xml.txt (Apache License 2.0)
* HOMEPAGE:
* http://wiki.fasterxml.com/AaltoHome

This product contains a modified version of 'HPACK', a Java implementation of
the HTTP/2 HPACK algorithm written by Twitter. It can be obtained at:

* LICENSE:
* license/LICENSE.hpack.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/twitter/hpack

This product contains a modified version of 'HPACK', a Python implementation of
the HTTP/2 HPACK algorithm written by Cory Benfield. It can be obtained at:

* LICENSE:
* license/LICENSE.hyper-hpack.txt (MIT License)
* HOMEPAGE:
* https://github.com/python-hyper/hpack/

This product contains a modified version of 'HPACK', a C implementation of
the HTTP/2 HPACK algorithm written by Tatsuhiro Tsujikawa. It can be obtained at:

* LICENSE:
* license/LICENSE.nghttp2-hpack.txt (MIT License)
* HOMEPAGE:
* https://github.com/nghttp2/nghttp2/

This product contains a modified portion of 'Apache Commons Lang', a Java library
that provides utilities for the java.lang API, which can be obtained at:

* LICENSE:
* license/LICENSE.commons-lang.txt (Apache License 2.0)
* HOMEPAGE:
* https://commons.apache.org/proper/commons-lang/


This product contains the Maven wrapper scripts from 'Maven Wrapper', which provides an easy way to ensure a user has everything necessary to run the Maven build.

* LICENSE:
* license/LICENSE.mvn-wrapper.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/takari/maven-wrapper

This product contains the dnsinfo.h header file, which provides a way to retrieve the system DNS configuration on macOS.
This private header is also used by Apple's open source
mDNSResponder (https://opensource.apple.com/tarballs/mDNSResponder/).

* LICENSE:
* license/LICENSE.dnsinfo.txt (Apple Public Source License 2.0)
* HOMEPAGE:
* https://www.opensource.apple.com/source/configd/configd-453.19/dnsinfo/dnsinfo.h
@@ -1,19 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name=local-console-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
topics=connect-test
@@ -1,19 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name=local-console-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
topic=connect-test
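
The two console connector configs above are meant for quick smoke tests. A minimal sketch of trying them with a standalone worker (run from the root of this vendored Kafka distribution; reading stdin and writing stdout is the FileStream connectors' behavior when no `file` is set, and the stdin source only works in standalone mode):

```bash
# Run a standalone worker with the console source and sink. Lines typed on stdin
# go into the connect-test topic; the sink prints records from connect-test back
# to stdout, so typed lines should echo back once both connectors are running.
$ ./bin/connect-standalone.sh config/connect-standalone.properties \
    config/connect-console-source.properties config/connect-console-sink.properties
```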
@@ -1,89 +0,0 @@
##
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended
# to be used with the examples, and some settings may differ from those used in a production system, especially
# the `bootstrap.servers` and those specifying replication factors.

# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
bootstrap.servers=localhost:9092

# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs
group.id=connect-cluster

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
#offset.storage.partitions=25

# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated,
# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
config.storage.topic=connect-configs
config.storage.replication.factor=1

# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted.
# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create
# the topic before starting Kafka Connect if a specific topic configuration is needed.
# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value.
# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able
# to run this example on a single-broker cluster and so here we instead set the replication factor to 1.
status.storage.topic=connect-status
status.storage.replication.factor=1
#status.storage.partitions=5

# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS.
# Specify hostname as 0.0.0.0 to bind to all interfaces.
# Leave hostname empty to bind to default interface.
# Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084"
#listeners=HTTP://:8083

# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers.
# If not set, it uses the value for "listeners" if configured.
#rest.advertised.host.name=
#rest.advertised.port=
#rest.advertised.listener=

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
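
Unlike standalone mode, a distributed worker started with this file takes no connector configs on the command line; connectors are managed over the REST API mentioned in the comments. A hedged sketch, assuming Connect's default port 8083 and reusing the console-sink settings from above (the connector name `console-sink` is just an example):

```bash
# Start the distributed worker, then register and list connectors over REST.
$ ./bin/connect-distributed.sh config/connect-distributed.properties
$ curl -X POST -H "Content-Type: application/json" \
    --data '{"name": "console-sink", "config": {"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max": "1", "topics": "connect-test"}}' \
    http://localhost:8083/connectors
$ curl http://localhost:8083/connectors   # list registered connectors
```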
@@ -1,20 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test
@@ -1,20 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
@@ -1,42 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

log4j.rootLogger=INFO, stdout, connectAppender

# Send the logs to the console.
#
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout

# Send the logs to a file, rolling the file at midnight local time. For example, the `File` option specifies the
# location of the log files (e.g. ${kafka.logs.dir}/connect.log), and at midnight local time the file is closed
# and copied in the same directory but with a filename that ends in the `DatePattern` option.
#
log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log
log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout

# The `%X{connector.context}` parameter in the layout includes connector-specific and task-specific information
# in the log messages, where appropriate. This makes it easier to identify those log messages that apply to a
# specific connector.
#
connect.log.pattern=[%d] %p %X{connector.context}%m (%c:%L)%n

log4j.appender.stdout.layout.ConversionPattern=${connect.log.pattern}
log4j.appender.connectAppender.layout.ConversionPattern=${connect.log.pattern}

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.reflections=ERROR
@@ -1,59 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see org.apache.kafka.clients.consumer.ConsumerConfig for more details

# Sample MirrorMaker 2.0 top-level configuration file
# Run with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties

# specify any number of cluster aliases
clusters = A, B

# connection information for each cluster
# This is a comma-separated list of host:port pairs for each cluster
# e.g. "A_host1:9092, A_host2:9092, A_host3:9092"
A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092
B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092

# enable and configure individual replication flows
A->B.enabled = true

# regex which defines which topics get replicated, e.g. "foo-.*"
A->B.topics = .*

B->A.enabled = true
B->A.topics = .*

# Setting replication factor of newly created remote topics
replication.factor=1

############################# Internal Topic Settings #############################
# The replication factor for mm2 internal topics "heartbeats", "B.checkpoints.internal" and
# "mm2-offset-syncs.B.internal"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
checkpoints.topic.replication.factor=1
heartbeats.topic.replication.factor=1
offset-syncs.topic.replication.factor=1

# The replication factor for connect internal topics "mm2-configs.B.internal", "mm2-offsets.B.internal" and
# "mm2-status.B.internal"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offset.storage.replication.factor=1
status.storage.replication.factor=1
config.storage.replication.factor=1

# customize as needed
# replication.policy.separator = _
# sync.topic.acls.enabled = false
# emit.heartbeats.interval.seconds = 5
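
As the header comment says, this sample is launched with connect-mirror-maker.sh. A quick way to sanity-check a replication flow, assuming the placeholder hosts above are reachable (with the default replication policy, topics mirrored from cluster A appear on B with an `A.` prefix):

```bash
# Start MirrorMaker 2.0 with the sample config, then check cluster B for
# replicated topics; topics from A should show up as "A.<topic>".
$ ./bin/connect-mirror-maker.sh config/connect-mirror-maker.properties
$ ./bin/kafka-topics.sh --list --bootstrap-server B_host1:9092
```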
@@ -1,41 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# These are defaults. This file just demonstrates how to override some settings.
bootstrap.servers=localhost:9092

# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will
# need to configure these based on the format they want their data in when loaded from or stored into Kafka
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply
# it to
key.converter.schemas.enable=true
value.converter.schemas.enable=true

offset.storage.file.filename=/tmp/connect.offsets
# Flush much faster than normal, which is useful for testing/debugging
offset.flush.interval.ms=10000

# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins
# (connectors, converters, transformations). The list should consist of top level directories that include
# any combination of:
# a) directories immediately containing jars with plugins and their dependencies
# b) uber-jars with plugins and their dependencies
# c) directories immediately containing the package directory structure of classes of plugins and their dependencies
# Note: symlinks will be followed to discover dependencies or plugins.
# Examples:
# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors,
#plugin.path=
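
This standalone worker config pairs naturally with the file source and sink configs earlier in the diff. A minimal sketch of the classic file pipeline (run from the Kafka root; `test.txt` and `test.sink.txt` are the paths from those configs):

```bash
# Lines appended to test.txt are routed through the connect-test topic into
# test.sink.txt by the FileStream source and sink connectors.
$ echo "hello kafka connect" > test.txt
$ ./bin/connect-standalone.sh config/connect-standalone.properties \
    config/connect-file-source.properties config/connect-file-sink.properties
$ cat test.sink.txt
```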
@@ -1,26 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see org.apache.kafka.clients.consumer.ConsumerConfig for more details

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# consumer group id
group.id=test-consumer-group

# What to do when there is no initial offset in Kafka or if the current
# offset does not exist any more on the server: latest, earliest, none
#auto.offset.reset=
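
This file can be passed to the console consumer so that the group id and offset settings above take effect. A small sketch, assuming the `connect-test` topic from the earlier examples:

```bash
# Read the connect-test topic with the console consumer, applying the settings
# from this file via --consumer.config.
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic connect-test --from-beginning --consumer.config config/consumer.properties
```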
173
thirdparty/kafka_2.13-3.1.0/config/kraft/README.md
vendored
@@ -1,173 +0,0 @@
KRaft (aka KIP-500) mode Preview Release
=========================================================

# Introduction
It is now possible to run Apache Kafka without Apache ZooKeeper! We call this the [Kafka Raft metadata mode](https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum), typically shortened to `KRaft mode`.
`KRaft` is intended to be pronounced like `craft` (as in `craftsmanship`). It is currently *PREVIEW AND SHOULD NOT BE USED IN PRODUCTION*, but it
is available for testing in the Kafka 3.1 release.

When the Kafka cluster is in KRaft mode, it does not store its metadata in ZooKeeper. In fact, you do not have to run ZooKeeper at all, because it stores its metadata in a KRaft quorum of controller nodes.

KRaft mode has many benefits -- some obvious, and some not so obvious. Clearly, it is nice to manage and configure one service rather than two services. In addition, you can now run a single process Kafka cluster.
Most important of all, KRaft mode is more scalable. We expect to be able to [support many more topics and partitions](https://www.confluent.io/kafka-summit-san-francisco-2019/kafka-needs-no-keeper/) in this mode.

# Quickstart

## Warning
KRaft mode in Kafka 3.1 is provided for testing only, *NOT* for production. We do not yet support upgrading existing ZooKeeper-based Kafka clusters into this mode.
There may be bugs, including serious ones. You should *assume that your data could be lost at any time* if you try the preview release of KRaft mode.

## Generate a cluster ID
The first step is to generate an ID for your new cluster, using the kafka-storage tool:

~~~~
$ ./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g
~~~~

## Format Storage Directories
The next step is to format your storage directories. If you are running in single-node mode, you can do this with one command:

~~~~
$ ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs
~~~~

If you are using multiple nodes, then you should run the format command on each node. Be sure to use the same cluster ID for each one.

## Start the Kafka Server
Finally, you are ready to start the Kafka server on each node.

~~~~
$ ./bin/kafka-server-start.sh ./config/kraft/server.properties
[2021-02-26 15:37:11,071] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-02-26 15:37:11,294] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-02-26 15:37:11,466] INFO [Log partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-02-26 15:37:11,509] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2021-02-26 15:37:11,640] INFO [RaftManager nodeId=1] Completed transition to Unattached(epoch=0, voters=[1], electionTimeoutMs=9037) (org.apache.kafka.raft.QuorumState)
...
~~~~

Just like with a ZooKeeper based broker, you can connect to port 9092 (or whatever port you configured) to perform administrative operations or produce or consume data.

~~~~
$ ./bin/kafka-topics.sh --create --topic foo --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
Created topic foo.
~~~~

# Deployment

## Controller Servers
In KRaft mode, only a small group of specially selected servers can act as controllers (unlike the ZooKeeper-based mode, where any server can become the
Controller). The specially selected controller servers will participate in the metadata quorum. Each controller server is either active, or a hot
standby for the current active controller server.

You will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand
without availability impact. Just like with ZooKeeper, you must keep a majority of the controllers alive in order to maintain availability. So if you have 3
controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2 failures.

## Process Roles
Each Kafka server now has a new configuration key called `process.roles` which can have the following values:

* If `process.roles` is set to `broker`, the server acts as a broker in KRaft mode.
* If `process.roles` is set to `controller`, the server acts as a controller in KRaft mode.
* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller in KRaft mode.
* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode. As mentioned earlier, you can't currently transition back and forth between ZooKeeper mode and KRaft mode without reformatting.

Nodes that act as both brokers and controllers are referred to as "combined" nodes. Combined nodes are simpler to operate for simple use cases and allow you to avoid
some fixed memory overheads associated with JVMs. The key disadvantage is that the controller will be less isolated from the rest of the system. For example, if activity on the broker causes an out of
memory condition, the controller part of the server is not isolated from that OOM condition.

## Quorum Voters
All nodes in the system must set the `controller.quorum.voters` configuration. This identifies the quorum controller servers that should be used. All the controllers must be enumerated.
This is similar to how, when using ZooKeeper, the `zookeeper.connect` configuration must contain all the ZooKeeper servers. Unlike with the ZooKeeper config, however, `controller.quorum.voters`
also has IDs for each node. The format is id1@host1:port1,id2@host2:port2, etc.

So if you have 10 brokers and 3 controllers named controller1, controller2, controller3, you might have the following configuration on controller1:
```
process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
```

Each broker and each controller must set `controller.quorum.voters`. Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server.
So on controller1, node.id must be set to 1, and so forth. Note that there is no requirement for controller IDs to start at 0 or 1. However, the easiest and least confusing way to allocate
node IDs is probably just to give each server a numeric ID, starting from 0.

Note that clients never need to configure `controller.quorum.voters`; only servers do.

## Kafka Storage Tool
As described above in the QuickStart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node.

This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically. One reason for the change
is that auto-formatting can sometimes obscure an error condition. For example, under UNIX, if a data directory can't be mounted, it may show up as blank. In this case, auto-formatting would be the wrong thing to do.

This is particularly important for the metadata log maintained by the controller servers. If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with
nothing in the log, which would cause all metadata to be lost.

# Missing Features
We don't support any kind of upgrade right now, either to or from KRaft mode. This is an important gap that we are working on.

Finally, the following Kafka features have not yet been fully implemented:

* Support for certain security features: configuring a KRaft-based Authorizer, setting up SCRAM, delegation tokens, and so forth
(although note that you can use authorizers such as `kafka.security.authorizer.AclAuthorizer` with KRaft clusters, even
if they are ZooKeeper-based: simply define `authorizer.class.name` and configure the authorizer as you normally would).
* Support for some configurations, like enabling unclean leader election by default or dynamically changing broker endpoints
* Support for KIP-112 "JBOD" modes

We've tried to make it clear when a feature is not supported in the preview release, but you may encounter some rough edges. We will cover these feature gaps incrementally in the `trunk` branch.

# Debugging
If you encounter an issue, you might want to take a look at the metadata log.

## kafka-dump-log
One way to view the metadata log is with the kafka-dump-log.sh tool, like so:

~~~~
$ ./bin/kafka-dump-log.sh --cluster-metadata-decoder --skip-record-metadata --files /tmp/kraft-combined-logs/__cluster_metadata-0/*.log
Dumping /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log
Starting offset: 0
baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: true position: 0 CreateTime: 1614382631640 size: 89 magic: 2 compresscodec: NONE crc: 1438115474 isvalid: true

baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 89 CreateTime: 1614382632329 size: 137 magic: 2 compresscodec: NONE crc: 1095855865 isvalid: true
payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"P3UFsWoNR-erL9PK98YLsA","brokerEpoch":0,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}}
baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 226 CreateTime: 1614382632453 size: 83 magic: 2 compresscodec: NONE crc: 455187130 isvalid: true
payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}}
baseOffset: 3 lastOffset: 3 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 309 CreateTime: 1614382634484 size: 83 magic: 2 compresscodec: NONE crc: 4055692847 isvalid: true
payload: {"type":"FENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}}
baseOffset: 4 lastOffset: 4 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: true position: 392 CreateTime: 1614382671857 size: 89 magic: 2 compresscodec: NONE crc: 1318571838 isvalid: true

baseOffset: 5 lastOffset: 5 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 481 CreateTime: 1614382672440 size: 137 magic: 2 compresscodec: NONE crc: 841144615 isvalid: true
payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"RXRJu7cnScKRZOnWQGs86g","brokerEpoch":4,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}}
baseOffset: 6 lastOffset: 6 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 618 CreateTime: 1614382672544 size: 83 magic: 2 compresscodec: NONE crc: 4155905922 isvalid: true
payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":4}}
baseOffset: 7 lastOffset: 8 count: 2 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 701 CreateTime: 1614382712158 size: 159 magic: 2 compresscodec: NONE crc: 3726758683 isvalid: true
payload: {"type":"TOPIC_RECORD","version":0,"data":{"name":"foo","topicId":"5zoAlv-xEh9xRANKXt1Lbg"}}
payload: {"type":"PARTITION_RECORD","version":0,"data":{"partitionId":0,"topicId":"5zoAlv-xEh9xRANKXt1Lbg","replicas":[1],"isr":[1],"removingReplicas":null,"addingReplicas":null,"leader":1,"leaderEpoch":0,"partitionEpoch":0}}
~~~~

## The Metadata Shell
Another tool for examining the metadata logs is the Kafka metadata shell. Just like the ZooKeeper shell, this allows you to inspect the metadata of the cluster.

~~~~
$ ./bin/kafka-metadata-shell.sh --snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log
>> ls /
brokers local metadataQuorum topicIds topics
>> ls /topics
foo
>> cat /topics/foo/0/data
{
"partitionId" : 0,
"topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
"replicas" : [ 1 ],
"isr" : [ 1 ],
"removingReplicas" : null,
"addingReplicas" : null,
"leader" : 1,
"leaderEpoch" : 0,
"partitionEpoch" : 0
}
>> exit
~~~~
@@ -1,128 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in KRaft mode, where
# Apache ZooKeeper is not present. See config/kraft/README.md for details.
#

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker

# The node id associated with this instance's roles
node.id=2

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://localhost:9092
inter.broker.listener.name=PLAINTEXT

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# Listener, host name, and port for the controller to advertise to the brokers. If
# this server is a controller, this listener must be configured.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-broker-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
@@ -1,127 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This configuration file is intended for use in KRaft mode, where
# Apache ZooKeeper is not present. See config/kraft/README.md for details.
#

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Listener, host name, and port for the controller to advertise to the brokers. If
# this server is a controller, this listener must be configured.
controller.listener.names=PLAINTEXT

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-controller-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
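
Read together, this controller.properties (node.id=1, listening on 9093) and the broker.properties above (node.id=2, port 9092) describe a minimal two-process KRaft cluster, with both files pointing at the quorum 1@localhost:9093. A hedged sketch of running both locally, following the storage-tool steps from the README:

```bash
# Format each node's log directory with the same cluster ID, then start the
# controller before the broker.
$ CLUSTER_ID=$(./bin/kafka-storage.sh random-uuid)
$ ./bin/kafka-storage.sh format -t $CLUSTER_ID -c ./config/kraft/controller.properties
$ ./bin/kafka-storage.sh format -t $CLUSTER_ID -c ./config/kraft/broker.properties
$ ./bin/kafka-server-start.sh ./config/kraft/controller.properties &
$ ./bin/kafka-server-start.sh ./config/kraft/broker.properties
```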
@@ -1,128 +0,0 @@
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
#
|
||||
# This configuration file is intended for use in KRaft mode, where
# Apache ZooKeeper is not present. See config/kraft/README.md for details.
#

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# Listener, host name, and port for the controller to advertise to the brokers. If
# this server is a controller, this listener must be configured.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kraft-combined-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
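This KRaft-mode file cannot be used until the log directory has been formatted with a cluster id. A minimal startup sketch, assuming the commands are run from the extracted `kafka_2.13-3.1.0` directory:

```bash
# Generate a cluster id and format the directory named in log.dirs
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties

# Start the combined broker/controller configured above
bin/kafka-server-start.sh config/kraft/server.properties
```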
91
thirdparty/kafka_2.13-3.1.0/config/log4j.properties
vendored
@@ -1,91 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
log4j.rootLogger=INFO, stdout, kafkaAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the line below to adjust ZK client logging
log4j.logger.org.apache.zookeeper=INFO

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false

# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
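Note that `${kafka.logs.dir}` is not defined in this file; the launch scripts inject it as a JVM system property. A hedged sketch of redirecting the logs, assuming the environment variables honored by the stock `kafka-run-class.sh`:

```bash
# Write broker logs under /var/log/kafka instead of the default logs/ directory
export LOG_DIR=/var/log/kafka
bin/kafka-server-start.sh config/kraft/server.properties
```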
45
thirdparty/kafka_2.13-3.1.0/config/producer.properties
vendored
@@ -1,45 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see org.apache.kafka.clients.producer.ProducerConfig for more details

############################# Producer Basics #############################

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=localhost:9092

# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
#request.timeout.ms=

# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
#linger.ms=

# the maximum size of a request in bytes
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
#buffer.memory=
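As a usage sketch, these producer defaults can be exercised with the console producer bundled in the same distribution; the topic name below is only an example:

```bash
# Send test records from stdin using the settings above
bin/kafka-console-producer.sh \
    --bootstrap-server localhost:9092 \
    --topic my-topic \
    --producer.config config/producer.properties
```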
136
thirdparty/kafka_2.13-3.1.0/config/server.properties
vendored
@@ -1,136 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
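For completeness, this legacy file implies the classic two-step startup order, ZooKeeper first and then the broker, roughly:

```bash
# Terminal 1: start ZooKeeper using the config at the end of this diff
bin/zookeeper-server-start.sh config/zookeeper.properties

# Terminal 2: start the broker against it
bin/kafka-server-start.sh config/server.properties
```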
21
thirdparty/kafka_2.13-3.1.0/config/tools-log4j.properties
vendored
@@ -1,21 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

log4j.rootLogger=WARN, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
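This quieter profile is the one the CLI tools load; a hedged example of applying it explicitly through the standard override variable:

```bash
# Run a CLI tool with the tools log4j profile forced on
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:config/tools-log4j.properties" \
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
```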
25
thirdparty/kafka_2.13-3.1.0/config/trogdor.conf
vendored
@@ -1,25 +0,0 @@
{
  "_comment": [
    "Licensed to the Apache Software Foundation (ASF) under one or more",
    "contributor license agreements. See the NOTICE file distributed with",
    "this work for additional information regarding copyright ownership.",
    "The ASF licenses this file to You under the Apache License, Version 2.0",
    "(the \"License\"); you may not use this file except in compliance with",
    "the License. You may obtain a copy of the License at",
    "",
    "http://www.apache.org/licenses/LICENSE-2.0",
    "",
    "Unless required by applicable law or agreed to in writing, software",
    "distributed under the License is distributed on an \"AS IS\" BASIS,",
    "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
    "See the License for the specific language governing permissions and",
    "limitations under the License."
  ],
  "platform": "org.apache.kafka.trogdor.basic.BasicPlatform", "nodes": {
    "node0": {
      "hostname": "localhost",
      "trogdor.agent.port": 8888,
      "trogdor.coordinator.port": 8889
    }
  }
}
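Trogdor is Kafka's built-in test framework; with this config, the `node0` agent and coordinator can be started along these lines (see the distribution's TROGDOR.md):

```bash
# Launch the agent and coordinator described by node0 above
bin/trogdor.sh agent -c config/trogdor.conf -n node0 &
bin/trogdor.sh coordinator -c config/trogdor.conf -n node0 &
```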
24
thirdparty/kafka_2.13-3.1.0/config/zookeeper.properties
vendored
@@ -1,24 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
# Disable the adminserver by default to avoid port conflicts.
# Set the port to something non-conflicting if choosing to enable this
admin.enableServer=false
# admin.serverPort=8080
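A quick connectivity check for a ZooKeeper instance started from this file, using the shell bundled with Kafka:

```bash
# List the root znodes to confirm ZooKeeper is answering on clientPort
bin/zookeeper-shell.sh localhost:2181 ls /
```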