diff --git a/.gitignore b/.gitignore
index 5f1648a..960e1f8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,5 @@
 HELP.md
+thirdparty/
 .gradle
 build/
 src/main/generated/
diff --git a/document/kafka/connect.md b/document/kafka/connect.md
new file mode 100644
index 0000000..8d9ff59
--- /dev/null
+++ b/document/kafka/connect.md
@@ -0,0 +1,218 @@
In the [previous chapter (link)](https://imprint.tistory.com/232) we looked at how to send data with Kafka producers and consumers.
In this chapter we walk through the setup required to use Kafka `Connect`.
All source code is available on [GitHub (link)](https://github.com/roy-zz/spring-cloud).

---

### Kafka Connect

To move data from one place to another, we usually write code that transfers it record by record.
With `Kafka Connect`, data can be imported and exported through configuration alone, without writing any code.
Connect supports `standalone mode` for a single worker process and `distributed mode` for a cluster of workers.
These features are exposed through a RESTful API, and data can be moved as a stream or in batches.
A wide range of plugins provides connectors for endpoints such as S3 and Hive.

![](connect_image/connector-flow.png)

To try Connect out, we will install MariaDB for the order service and hook Connect up to it.

---

### Order Service

#### Installing MariaDB

We install MariaDB on the macOS machine that runs the project.

1. Install MariaDB

Run the command below to install MariaDB via `Homebrew`.

```bash
$ brew install mariadb
```

If the output matches the image below, the installation completed successfully.

![](connect_image/installed-mariadb.png)

2. Start MariaDB

There are several ways to start MariaDB, but since we installed it with `Homebrew` we start it the same way.

```bash
$ brew services start mariadb
```

![](connect_image/started-mariadb.png)

**Stop MariaDB**: `brew services stop mariadb`
**Check MariaDB status**: `brew services info mariadb`

3. Connect

Run the command below to verify that you can connect.

```bash
$ mysql -uroot
```

If you hit an `Access denied` error as I did, resolve it as follows: connect with sudo, then reset the root password from the MySQL prompt.

```bash
$ sudo mysql -u root
```

```sql
SELECT user, host, plugin FROM mysql.user;
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root');
FLUSH PRIVILEGES;
EXIT;
```

Use any password you find convenient.
Reconnect with the additional `-p` flag and you will be prompted for a password; enter the one you just set.

![](connect_image/renew-mariadb-password.png)

4. Create the database

Run the statement below to create the `mydb` database that the order service will use.

```sql
CREATE DATABASE mydb;
```
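The order service will create the schema itself once we set `ddl-auto: update` in the next section, so nothing has to be created by hand. For reference only, here is a minimal sketch of the `users` table that the Source/Sink chapters assume; the column types are guesses, and the one detail that actually matters later is the auto-incrementing `id` column that the Source Connect `incrementing` mode will track.

```bash
# Hypothetical sketch: JPA (ddl-auto: update) creates the real table from the
# entity. Only the auto-incrementing id column is load-bearing for the
# Source Connect configuration in the next chapter.
$ mysql -uroot -proot mydb -e "
CREATE TABLE IF NOT EXISTS users (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    user_id  VARCHAR(255),
    password VARCHAR(255),
    name     VARCHAR(255)
);"
```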
---

### Switching the Order Service to MariaDB

Until now the order service used an embedded H2 database.
To use Kafka Connect, we switch it over to the MariaDB instance installed above.

1. Add the dependency

Add the MariaDB client dependency to the `build.gradle` file as shown below.

```groovy
implementation 'org.mariadb.jdbc:mariadb-java-client:3.0.4'
```

2. Update application.yml

In the previous step we installed MariaDB, created the `mydb` database, and changed the administrator password to `root`.
Update `application.yml` so the order service connects with the same information.

```yaml
# omitted
  h2:
    console:
      enabled: true
      settings:
        web-allow-others: true
      path: /h2-console
  jpa:
    hibernate:
      ddl-auto: update
  datasource:
    driver-class-name: org.mariadb.jdbc.Driver
    url: jdbc:mariadb://localhost:3306/mydb
    username: root
    password: root
# omitted
```

Restart the order service once the change is in place.

3. Open h2-console

The name is still h2-console, but from now on we use it to connect to MariaDB.
Adjust the settings as in the image below; for the password, enter the one you set earlier.

![](connect_image/h2-console-set-mariadb.png)

---

### Installing Connect

I install version 7.1.1, the latest release at the time of writing.
For the latest release information, see [here](https://docs.confluent.io/platform/current/release-notes/index.html).

1. Download confluent-community

Enter the address below in a web browser to download confluent-community.

`https://packages.confluent.io/archive/7.1/confluent-community-7.1.1.tar.gz`

2. Extract the confluent archive

Extract the downloaded file.
I extracted it into the same directory where `Kafka` is installed, and the files are also in my [Git repository](https://github.com/roy-zz/spring-cloud).

![](connect_image/add-confluent.png)

3. Run connect

Move into the directory where confluent was extracted and run the command below to start `connect`.

```bash
$ ./bin/connect-distributed ./etc/kafka/connect-distributed.properties
```

![](connect_image/start-connect.png)

4. Verify that connect is running

If `connect` started correctly, new topics that we have not seen before should appear in `Kafka`.
Run the command below to list Kafka's topics.

```bash
$ ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
```

If everything is running, connect-related topics (`connect-configs`, `connect-offsets`, `connect-status`) should be listed, as in the image below.

![](connect_image/add-topic-for-connect.png)

5. Download the JDBC Connector

Enter the address below in a web browser to download the latest kafka-connect-jdbc archive.

`https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc`

6. Extract the JDBC Connector

Extract the downloaded file.
I extracted it into the same directory as confluent, and it is managed the same way.

![](connect_image/add-kafka-connect-jdbc.png)

7. Set the JDBC lib path

Note the path of the `lib` directory under `confluentinc-kafka-connect-jdbc`, as in the image below.

![](connect_image/copy-lib-directory.png)

Edit the connect-distributed.properties file under `{confluent install path}/etc/kafka/` as shown below.

![](connect_image/modify-plugin-path.png)
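In text form, a sketch of the change, assuming the connector was extracted next to the confluent installation (the path below is hypothetical; use wherever you actually extracted it). `plugin.path` takes a comma-separated list of directories that Connect scans for connector plugins, and Connect must be restarted for the change to take effect.

```bash
# Point plugin.path at the JDBC connector's lib directory (hypothetical path).
# In a .properties file the last definition of a key wins, so appending works.
$ echo 'plugin.path=/Users/roy/thirdparty/confluentinc-kafka-connect-jdbc/lib' \
    >> ./etc/kafka/connect-distributed.properties
```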
8. Download the MariaDB Java Client

Go to the address below in a browser and download the MariaDB Java Client jar file.

`https://mvnrepository.com/artifact/org.mariadb.jdbc/mariadb-java-client/3.0.4`

Move the downloaded jar file into the `${confluent install path}/share/java/kafka` directory, as in the image below.

![](connect_image/add-mariadb-java-client.png)

---

Everything needed to use Kafka Connect is now in place.
In the next chapter we look at how to actually use a `Source Connect` and a `Sink Connect`.

---

**Referenced course:**

- https://www.inflearn.com/course/%EC%8A%A4%ED%94%84%EB%A7%81-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C-%EB%A7%88%EC%9D%B4%ED%81%AC%EB%A1%9C%EC%84%9C%EB%B9%84%EC%8A%A4
\ No newline at end of file
diff --git a/document/kafka/connect_image/add-confluent.png b/document/kafka/connect_image/add-confluent.png
new file mode 100644
index 0000000..6e5b7eb
Binary files /dev/null and b/document/kafka/connect_image/add-confluent.png differ
diff --git a/document/kafka/connect_image/add-kafka-connect-jdbc.png b/document/kafka/connect_image/add-kafka-connect-jdbc.png
new file mode 100644
index 0000000..6c163e6
Binary files /dev/null and b/document/kafka/connect_image/add-kafka-connect-jdbc.png differ
diff --git a/document/kafka/connect_image/add-mariadb-java-client.png b/document/kafka/connect_image/add-mariadb-java-client.png
new file mode 100644
index 0000000..9f00f51
Binary files /dev/null and b/document/kafka/connect_image/add-mariadb-java-client.png differ
diff --git a/document/kafka/connect_image/add-topic-for-connect.png b/document/kafka/connect_image/add-topic-for-connect.png
new file mode 100644
index 0000000..9926a06
Binary files /dev/null and b/document/kafka/connect_image/add-topic-for-connect.png differ
diff --git a/document/kafka/connect_image/connector-flow.png b/document/kafka/connect_image/connector-flow.png
new file mode 100644
index 0000000..b47c972
Binary files /dev/null and b/document/kafka/connect_image/connector-flow.png differ
diff --git a/document/kafka/connect_image/copy-lib-directory.png b/document/kafka/connect_image/copy-lib-directory.png
new file mode 100644
index 0000000..af904c3
Binary files /dev/null and b/document/kafka/connect_image/copy-lib-directory.png differ
diff --git a/document/kafka/connect_image/h2-console-set-mariadb.png b/document/kafka/connect_image/h2-console-set-mariadb.png
new file mode 100644
index 0000000..b254fe0
Binary files /dev/null and b/document/kafka/connect_image/h2-console-set-mariadb.png differ
diff --git a/document/kafka/connect_image/installed-mariadb.png b/document/kafka/connect_image/installed-mariadb.png
new file mode 100644
index 0000000..59a103e
Binary files /dev/null and b/document/kafka/connect_image/installed-mariadb.png differ
diff --git a/document/kafka/connect_image/modify-plugin-path.png b/document/kafka/connect_image/modify-plugin-path.png
new file mode 100644
index 0000000..0afe4b0
Binary files /dev/null and b/document/kafka/connect_image/modify-plugin-path.png differ
diff --git a/document/kafka/connect_image/renew-mariadb-password.png b/document/kafka/connect_image/renew-mariadb-password.png
new file mode 100644
index 0000000..9a6cff0
Binary files /dev/null and b/document/kafka/connect_image/renew-mariadb-password.png differ
diff --git a/document/kafka/connect_image/start-connect.png b/document/kafka/connect_image/start-connect.png
new file mode 100644
index 0000000..d7353f5
Binary files /dev/null and b/document/kafka/connect_image/start-connect.png differ
diff --git a/document/kafka/connect_image/started-mariadb.png b/document/kafka/connect_image/started-mariadb.png
new file mode 100644
index 0000000..6a865d9
Binary files /dev/null and b/document/kafka/connect_image/started-mariadb.png differ
diff --git a/document/kafka/sink_source.md b/document/kafka/sink_source.md
new file mode 100644
index 0000000..b8b410a
--- /dev/null
+++ b/document/kafka/sink_source.md
@@ -0,0 +1,127 @@
In the [previous chapter (link)](https://imprint.tistory.com/233) we looked at how to install Kafka `Connect`.
In this chapter we look at how to use a `Source Connect` and a `Sink Connect`.
All source code is available on [GitHub (link)](https://github.com/roy-zz/spring-cloud).

---

Using MariaDB with a source connector and a sink connector, we will build the structure shown below.

![](sink_source_image/connector-flow.png)

A `Source Connect` receives data from the producing side and forwards it to the Kafka cluster, while a `Sink Connect` delivers data from the Kafka cluster to the target store.
Previously we produced and consumed data through the Kafka cluster with a producer and a consumer; this time we do the same work with a source connector and a sink connector.

### Source Connect

We will register a new connector by calling `connect`'s connectors API from Postman.

Among the values we send, `table.whitelist` names the table we want to watch.
`connect` sits idle until new data is inserted into that table, then fetches the newly inserted rows.
`topic.prefix` is the prefix of the topic used to deliver the data; the topic name is this prefix combined with the table name, so here it will be `roy_topic_users`.
In short, a `Source Connect` watches the table we specify and, whenever a change is detected, pushes the data to the topic we specify.

1. Call POST /connectors

Call `connect`'s /connectors API with the JSON below as the HTTP body.

```json
{
    "name": "my-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "root",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "roy_topic_",
        "tasks.max": "1"
    }
}
```

If a 201 code comes back as below, the `Source Connect` was registered successfully.
If you get a 500 error instead, it may be because Connect was not restarted after the setup in the previous chapter; restart Connect and try again.

![](sink_source_image/post-connectors-api.png)

**Note**

Calling GET http://localhost:8083/connectors returns the list of registered connectors.

![](sink_source_image/get-connectors-api.png)

Calling GET http://localhost:8083/connectors/my-source-connect returns the connector's details.

![](sink_source_image/get-connectors-detail-api.png)
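If you prefer the terminal to Postman, the same calls can be made with curl. The sketch below assumes the JSON above was saved to a hypothetical file named `my-source-connect.json`; the `/status` endpoint is useful for checking that the connector and its task are actually in the RUNNING state.

```bash
# Register the connector (same JSON body as the Postman call above).
$ curl -X POST -H "Content-Type: application/json" \
    --data @my-source-connect.json http://localhost:8083/connectors

# List registered connectors, then check one connector's runtime state.
$ curl http://localhost:8083/connectors
$ curl http://localhost:8083/connectors/my-source-connect/status
```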
2. Register a consumer

Start a consumer to watch the data that the `Source Connect` produces.

```bash
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic roy_topic_users --from-beginning
```

![](sink_source_image/register-consumer.png)

3. Insert a user row

To check that the expected data is delivered when a new user is inserted, open the MariaDB console and register a new user.

```sql
INSERT INTO users (user_id, password, name) VALUES ('roy', 'password', 'roy choi');
```

The consumer receives the data as expected.
The left pane is the consumer; the right pane is where the data is being inserted.

![](sink_source_image/insert-user-receiver-source-connect.png)

**Note**

Even after completing steps 1 through 3, there were cases where the consumer received no data.
I resolved it as described below; if you run into the same problem, this may help.

Symptom: the consumer receives no data, and the connect log shows a SQL Exception.
Fix:
- Add a mysql-connector.jar file to the directory where the mariadb-java-client jar was added, as in the image below. (Download the mysql jar [here](https://dev.mysql.com/downloads/file/?id=510648).)

![](sink_source_image/add-mysql-connector.png)

- When registering the `Source Connect` through the connect API, use a mysql connection URL instead of a mariadb one.

```json
{
    "name": "roy-source-connect",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/mydb",
        "connection.user": "root",
        "connection.password": "root",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "table.whitelist": "users",
        "topic.prefix": "roy_topic_",
        "tasks.max": "1"
    }
}
```

If you ran into the same problem, the same fix will most likely resolve it.
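When re-registering after such a failure, keep in mind that connector names must be unique; if the broken connector is still registered, it may be easiest to delete it first and then POST the corrected configuration again.

```bash
# Remove the failed connector, then re-register it with the fixed config.
$ curl -X DELETE http://localhost:8083/connectors/my-source-connect
```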
---

**Referenced course:**

- https://www.inflearn.com/course/%EC%8A%A4%ED%94%84%EB%A7%81-%ED%81%B4%EB%9D%BC%EC%9A%B0%EB%93%9C-%EB%A7%88%EC%9D%B4%ED%81%AC%EB%A1%9C%EC%84%9C%EB%B9%84%EC%8A%A4
\ No newline at end of file
diff --git a/document/kafka/sink_source_image/add-mysql-connector.png b/document/kafka/sink_source_image/add-mysql-connector.png
new file mode 100644
index 0000000..ccbbd60
Binary files /dev/null and b/document/kafka/sink_source_image/add-mysql-connector.png differ
diff --git a/document/kafka/sink_source_image/connector-flow.png b/document/kafka/sink_source_image/connector-flow.png
new file mode 100644
index 0000000..b47c972
Binary files /dev/null and b/document/kafka/sink_source_image/connector-flow.png differ
diff --git a/document/kafka/sink_source_image/get-connectors-api.png b/document/kafka/sink_source_image/get-connectors-api.png
new file mode 100644
index 0000000..f4309d3
Binary files /dev/null and b/document/kafka/sink_source_image/get-connectors-api.png differ
diff --git a/document/kafka/sink_source_image/get-connectors-detail-api.png b/document/kafka/sink_source_image/get-connectors-detail-api.png
new file mode 100644
index 0000000..7a03133
Binary files /dev/null and b/document/kafka/sink_source_image/get-connectors-detail-api.png differ
diff --git a/document/kafka/sink_source_image/insert-user-receiver-source-connect.png b/document/kafka/sink_source_image/insert-user-receiver-source-connect.png
new file mode 100644
index 0000000..6fcab7f
Binary files /dev/null and b/document/kafka/sink_source_image/insert-user-receiver-source-connect.png differ
diff --git a/document/kafka/sink_source_image/post-connectors-api.png b/document/kafka/sink_source_image/post-connectors-api.png
new file mode 100644
index 0000000..c627c9a
Binary files /dev/null and b/document/kafka/sink_source_image/post-connectors-api.png differ
diff --git a/document/kafka/sink_source_image/register-consumer.png b/document/kafka/sink_source_image/register-consumer.png
new file mode 100644
index 0000000..1c36ba6
Binary files /dev/null and b/document/kafka/sink_source_image/register-consumer.png differ
diff --git a/thirdparty/kafka_2.13-3.1.0.tgz b/thirdparty/kafka_2.13-3.1.0.tgz
deleted file mode 100644
index f02a84c..0000000
Binary files a/thirdparty/kafka_2.13-3.1.0.tgz and /dev/null differ
diff --git a/thirdparty/kafka_2.13-3.1.0/LICENSE b/thirdparty/kafka_2.13-3.1.0/LICENSE
deleted file mode 100644
index 42a8d79..0000000
--- a/thirdparty/kafka_2.13-3.1.0/LICENSE
+++ /dev/null
@@ -1,321 +0,0 @@
- - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity.
- - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. 
Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
- -------------------------------------------------------------------------------- -This project bundles some components that are also licensed under the Apache -License Version 2.0: - -audience-annotations-0.5.0 -commons-cli-1.4 -commons-lang3-3.8.1 -jackson-annotations-2.12.3 -jackson-core-2.12.3 -jackson-databind-2.12.3 -jackson-dataformat-csv-2.12.3 -jackson-datatype-jdk8-2.12.3 -jackson-jaxrs-base-2.12.3 -jackson-jaxrs-json-provider-2.12.3 -jackson-module-jaxb-annotations-2.12.3 -jackson-module-paranamer-2.10.5 -jackson-module-scala_2.13-2.12.3 -jakarta.validation-api-2.0.2 -javassist-3.27.0-GA -jetty-client-9.4.43.v20210629 -jetty-continuation-9.4.43.v20210629 -jetty-http-9.4.43.v20210629 -jetty-io-9.4.43.v20210629 -jetty-security-9.4.43.v20210629 -jetty-server-9.4.43.v20210629 -jetty-servlet-9.4.43.v20210629 -jetty-servlets-9.4.43.v20210629 -jetty-util-9.4.43.v20210629 -jetty-util-ajax-9.4.43.v20210629 -jersey-common-2.34 -jersey-server-2.34 -jose4j-0.7.8 -log4j-1.2.17 -lz4-java-1.8.0 -maven-artifact-3.8.1 -metrics-core-4.1.12.1 -netty-buffer-4.1.68.Final -netty-codec-4.1.68.Final -netty-common-4.1.68.Final -netty-handler-4.1.68.Final -netty-resolver-4.1.68.Final -netty-transport-4.1.68.Final -netty-transport-native-epoll-4.1.68.Final -netty-transport-native-unix-common-4.1.68.Final -plexus-utils-3.2.1 -rocksdbjni-6.22.1.1 -scala-collection-compat_2.13-2.4.4 -scala-library-2.13.6 -scala-logging_2.13-3.9.3 -scala-reflect-2.13.6 -scala-java8-compat_2.13-1.0.0 -snappy-java-1.1.8.4 -zookeeper-3.6.3 -zookeeper-jute-3.6.3 - -=============================================================================== -This product bundles various third-party components under other open source -licenses. This section summarizes those components and their licenses. -See licenses/ for text of these licenses. 
- ---------------------------------------- -Eclipse Distribution License - v 1.0 -see: licenses/eclipse-distribution-license-1.0 - -jakarta.activation-api-1.2.1 -jakarta.xml.bind-api-2.3.2 - ---------------------------------------- -Eclipse Public License - v 2.0 -see: licenses/eclipse-public-license-2.0 - -jakarta.annotation-api-1.3.5 -jakarta.ws.rs-api-2.1.6 -javax.ws.rs-api-2.1.1 -hk2-api-2.6.1 -hk2-locator-2.6.1 -hk2-utils-2.6.1 -osgi-resource-locator-1.0.3 -aopalliance-repackaged-2.6.1 -jakarta.inject-2.6.1 -jersey-container-servlet-2.34 -jersey-container-servlet-core-2.34 -jersey-client-2.34 -jersey-hk2-2.34 -jersey-media-jaxb-2.31 - ---------------------------------------- -CDDL 1.1 + GPLv2 with classpath exception -see: licenses/CDDL+GPL-1.1 - -javax.servlet-api-3.1.0 -jaxb-api-2.3.0 -activation-1.1.1 - ---------------------------------------- -MIT License - -argparse4j-0.7.0, see: licenses/argparse-MIT -jopt-simple-5.0.4, see: licenses/jopt-simple-MIT -slf4j-api-1.7.30, see: licenses/slf4j-MIT -slf4j-log4j12-1.7.30, see: licenses/slf4j-MIT - ---------------------------------------- -BSD 2-Clause - -zstd-jni-1.5.0-4 see: licenses/zstd-jni-BSD-2-clause - ---------------------------------------- -BSD 3-Clause - -jline-3.12.1, see: licenses/jline-BSD-3-clause -paranamer-2.8, see: licenses/paranamer-BSD-3-clause - ---------------------------------------- -Do What The F*ck You Want To Public License -see: licenses/DWTFYWTPL - -reflections-0.9.12 diff --git a/thirdparty/kafka_2.13-3.1.0/NOTICE b/thirdparty/kafka_2.13-3.1.0/NOTICE deleted file mode 100644 index a50c86d..0000000 --- a/thirdparty/kafka_2.13-3.1.0/NOTICE +++ /dev/null @@ -1,856 +0,0 @@ -Apache Kafka -Copyright 2021 The Apache Software Foundation. - -This product includes software developed at -The Apache Software Foundation (https://www.apache.org/). - -This distribution has a binary dependency on jersey, which is available under the CDDL -License. The source code of jersey can be found at https://github.com/jersey/jersey/. - -This distribution has a binary test dependency on jqwik, which is available under -the Eclipse Public License 2.0. The source code can be found at -https://github.com/jlink/jqwik. - -The streams-scala (streams/streams-scala) module was donated by Lightbend and the original code was copyrighted by them: -Copyright (C) 2018 Lightbend Inc. -Copyright (C) 2017-2018 Alexis Seigneurin. - -This project contains the following code copied from Apache Hadoop: -clients/src/main/java/org/apache/kafka/common/utils/PureJavaCrc32C.java -Some portions of this file Copyright (c) 2004-2006 Intel Corporation and licensed under the BSD license. - -This project contains the following code copied from Apache Hive: -streams/src/main/java/org/apache/kafka/streams/state/internals/Murmur3.java - -// ------------------------------------------------------------------ -// NOTICE file corresponding to the section 4d of The Apache License, -// Version 2.0, in this case for -// ------------------------------------------------------------------ - -# Notices for Eclipse GlassFish - -This content is produced and maintained by the Eclipse GlassFish project. - -* Project home: https://projects.eclipse.org/projects/ee4j.glassfish - -## Trademarks - -Eclipse GlassFish, and GlassFish are trademarks of the Eclipse Foundation. - -## Copyright - -All content is the property of the respective authors or their employers. For -more information regarding authorship of content, please consult the listed -source code repository logs. 
- -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Public License v. 2.0 which is available at -http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made -available under the following Secondary Licenses when the conditions for such -availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU -General Public License, version 2 with the GNU Classpath Exception which is -available at https://www.gnu.org/software/classpath/license.html. - -SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 - -## Source Code - -The project maintains the following source code repositories: - -* https://github.com/eclipse-ee4j/glassfish-ha-api -* https://github.com/eclipse-ee4j/glassfish-logging-annotation-processor -* https://github.com/eclipse-ee4j/glassfish-shoal -* https://github.com/eclipse-ee4j/glassfish-cdi-porting-tck -* https://github.com/eclipse-ee4j/glassfish-jsftemplating -* https://github.com/eclipse-ee4j/glassfish-hk2-extra -* https://github.com/eclipse-ee4j/glassfish-hk2 -* https://github.com/eclipse-ee4j/glassfish-fighterfish - -## Third-party Content - -This project leverages the following third party content. - -None - -## Cryptography - -Content may contain encryption software. The country in which you are currently -may have restrictions on the import, possession, and use, and/or re-export to -another country, of encryption software. BEFORE using any encryption software, -please check the country's laws, regulations and policies concerning the import, -possession, or use, and re-export of encryption software, to see if this is -permitted. - - -Apache Yetus - Audience Annotations -Copyright 2015-2017 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -Apache Commons CLI -Copyright 2001-2017 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -Apache Commons Lang -Copyright 2001-2018 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -# Jackson JSON processor - -Jackson is a high-performance, Free/Open Source JSON processing library. -It was originally written by Tatu Saloranta (tatu.saloranta@iki.fi), and has -been in development since 2007. -It is currently developed by a community of developers, as well as supported -commercially by FasterXML.com. - -## Licensing - -Jackson core and extension components may licensed under different licenses. -To find the details that apply to this artifact see the accompanying LICENSE file. -For more information, including possible other licensing options, contact -FasterXML.com (http://fasterxml.com). - -## Credits - -A list of contributors may be found from CREDITS file, which is included -in some artifacts (usually source distributions); but is always available -from the source code management (SCM) system project uses. - - -# Notices for Eclipse Project for JAF - -This content is produced and maintained by the Eclipse Project for JAF project. - -* Project home: https://projects.eclipse.org/projects/ee4j.jaf - -## Copyright - -All content is the property of the respective authors or their employers. For -more information regarding authorship of content, please consult the listed -source code repository logs. 
- -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Distribution License v. 1.0, -which is available at http://www.eclipse.org/org/documents/edl-v10.php. - -SPDX-License-Identifier: BSD-3-Clause - -## Source Code - -The project maintains the following source code repositories: - -* https://github.com/eclipse-ee4j/jaf - -## Third-party Content - -This project leverages the following third party content. - -JUnit (4.12) - -* License: Eclipse Public License - - -# Notices for Jakarta Annotations - -This content is produced and maintained by the Jakarta Annotations project. - - * Project home: https://projects.eclipse.org/projects/ee4j.ca - -## Trademarks - -Jakarta Annotations is a trademark of the Eclipse Foundation. - -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Public License v. 2.0 which is available at -http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made -available under the following Secondary Licenses when the conditions for such -availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU -General Public License, version 2 with the GNU Classpath Exception which is -available at https://www.gnu.org/software/classpath/license.html. - -SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 - -## Source Code - -The project maintains the following source code repositories: - - * https://github.com/eclipse-ee4j/common-annotations-api - -## Third-party Content - -## Cryptography - -Content may contain encryption software. The country in which you are currently -may have restrictions on the import, possession, and use, and/or re-export to -another country, of encryption software. BEFORE using any encryption software, -please check the country's laws, regulations and policies concerning the import, -possession, or use, and re-export of encryption software, to see if this is -permitted. - - -# Notices for the Jakarta RESTful Web Services Project - -This content is produced and maintained by the **Jakarta RESTful Web Services** -project. - -* Project home: https://projects.eclipse.org/projects/ee4j.jaxrs - -## Trademarks - -**Jakarta RESTful Web Services** is a trademark of the Eclipse Foundation. - -## Copyright - -All content is the property of the respective authors or their employers. For -more information regarding authorship of content, please consult the listed -source code repository logs. - -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Public License v. 2.0 which is available at -http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made -available under the following Secondary Licenses when the conditions for such -availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU -General Public License, version 2 with the GNU Classpath Exception which is -available at https://www.gnu.org/software/classpath/license.html. - -SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 - -## Source Code - -The project maintains the following source code repositories: - -* https://github.com/eclipse-ee4j/jaxrs-api - -## Third-party Content - -This project leverages the following third party content. 
- -javaee-api (7.0) - -* License: Apache-2.0 AND W3C - -JUnit (4.11) - -* License: Common Public License 1.0 - -Mockito (2.16.0) - -* Project: http://site.mockito.org -* Source: https://github.com/mockito/mockito/releases/tag/v2.16.0 - -## Cryptography - -Content may contain encryption software. The country in which you are currently -may have restrictions on the import, possession, and use, and/or re-export to -another country, of encryption software. BEFORE using any encryption software, -please check the country's laws, regulations and policies concerning the import, -possession, or use, and re-export of encryption software, to see if this is -permitted. - - -# Notices for Eclipse Project for JAXB - -This content is produced and maintained by the Eclipse Project for JAXB project. - -* Project home: https://projects.eclipse.org/projects/ee4j.jaxb - -## Trademarks - -Eclipse Project for JAXB is a trademark of the Eclipse Foundation. - -## Copyright - -All content is the property of the respective authors or their employers. For -more information regarding authorship of content, please consult the listed -source code repository logs. - -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Distribution License v. 1.0 which is available -at http://www.eclipse.org/org/documents/edl-v10.php. - -SPDX-License-Identifier: BSD-3-Clause - -## Source Code - -The project maintains the following source code repositories: - -* https://github.com/eclipse-ee4j/jaxb-api - -## Third-party Content - -This project leverages the following third party content. - -None - -## Cryptography - -Content may contain encryption software. The country in which you are currently -may have restrictions on the import, possession, and use, and/or re-export to -another country, of encryption software. BEFORE using any encryption software, -please check the country's laws, regulations and policies concerning the import, -possession, or use, and re-export of encryption software, to see if this is -permitted. - - -# Notice for Jersey -This content is produced and maintained by the Eclipse Jersey project. - -* Project home: https://projects.eclipse.org/projects/ee4j.jersey - -## Trademarks -Eclipse Jersey is a trademark of the Eclipse Foundation. - -## Copyright - -All content is the property of the respective authors or their employers. For -more information regarding authorship of content, please consult the listed -source code repository logs. - -## Declared Project Licenses - -This program and the accompanying materials are made available under the terms -of the Eclipse Public License v. 2.0 which is available at -http://www.eclipse.org/legal/epl-2.0. This Source Code may also be made -available under the following Secondary Licenses when the conditions for such -availability set forth in the Eclipse Public License v. 2.0 are satisfied: GNU -General Public License, version 2 with the GNU Classpath Exception which is -available at https://www.gnu.org/software/classpath/license.html. - -SPDX-License-Identifier: EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0 - -## Source Code -The project maintains the following source code repositories: - -* https://github.com/eclipse-ee4j/jersey - -## Third-party Content - -Angular JS, v1.6.6 -* License MIT (http://www.opensource.org/licenses/mit-license.php) -* Project: http://angularjs.org -* Coyright: (c) 2010-2017 Google, Inc. 
- -aopalliance Version 1 -* License: all the source code provided by AOP Alliance is Public Domain. -* Project: http://aopalliance.sourceforge.net -* Copyright: Material in the public domain is not protected by copyright - -Bean Validation API 2.0.2 -* License: Apache License, 2.0 -* Project: http://beanvalidation.org/1.1/ -* Copyright: 2009, Red Hat, Inc. and/or its affiliates, and individual contributors -* by the @authors tag. - -Hibernate Validator CDI, 6.1.2.Final -* License: Apache License, 2.0 -* Project: https://beanvalidation.org/ -* Repackaged in org.glassfish.jersey.server.validation.internal.hibernate - -Bootstrap v3.3.7 -* License: MIT license (https://github.com/twbs/bootstrap/blob/master/LICENSE) -* Project: http://getbootstrap.com -* Copyright: 2011-2016 Twitter, Inc - -Google Guava Version 18.0 -* License: Apache License, 2.0 -* Copyright (C) 2009 The Guava Authors - -javax.inject Version: 1 -* License: Apache License, 2.0 -* Copyright (C) 2009 The JSR-330 Expert Group - -Javassist Version 3.25.0-GA -* License: Apache License, 2.0 -* Project: http://www.javassist.org/ -* Copyright (C) 1999- Shigeru Chiba. All Rights Reserved. - -Jackson JAX-RS Providers Version 2.10.1 -* License: Apache License, 2.0 -* Project: https://github.com/FasterXML/jackson-jaxrs-providers -* Copyright: (c) 2009-2011 FasterXML, LLC. All rights reserved unless otherwise indicated. - -jQuery v1.12.4 -* License: jquery.org/license -* Project: jquery.org -* Copyright: (c) jQuery Foundation - -jQuery Barcode plugin 0.3 -* License: MIT & GPL (http://www.opensource.org/licenses/mit-license.php & http://www.gnu.org/licenses/gpl.html) -* Project: http://www.pasella.it/projects/jQuery/barcode -* Copyright: (c) 2009 Antonello Pasella antonello.pasella@gmail.com - -JSR-166 Extension - JEP 266 -* License: CC0 -* No copyright -* Written by Doug Lea with assistance from members of JCP JSR-166 Expert Group and released to the public domain, as explained at http://creativecommons.org/publicdomain/zero/1.0/ - -KineticJS, v4.7.1 -* License: MIT license (http://www.opensource.org/licenses/mit-license.php) -* Project: http://www.kineticjs.com, https://github.com/ericdrowell/KineticJS -* Copyright: Eric Rowell - -org.objectweb.asm Version 8.0 -* License: Modified BSD (http://asm.objectweb.org/license.html) -* Copyright (c) 2000-2011 INRIA, France Telecom. All rights reserved. - -org.osgi.core version 6.0.0 -* License: Apache License, 2.0 -* Copyright (c) OSGi Alliance (2005, 2008). All Rights Reserved. - -org.glassfish.jersey.server.internal.monitoring.core -* License: Apache License, 2.0 -* Copyright (c) 2015-2018 Oracle and/or its affiliates. All rights reserved. -* Copyright 2010-2013 Coda Hale and Yammer, Inc. - -W3.org documents -* License: W3C License -* Copyright: Copyright (c) 1994-2001 World Wide Web Consortium, (Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University). All Rights Reserved. http://www.w3.org/Consortium/Legal/ - - -============================================================== - Jetty Web Container - Copyright 1995-2018 Mort Bay Consulting Pty Ltd. -============================================================== - -The Jetty Web Container is Copyright Mort Bay Consulting Pty Ltd -unless otherwise noted. 
- -Jetty is dual licensed under both - - * The Apache 2.0 License - http://www.apache.org/licenses/LICENSE-2.0.html - - and - - * The Eclipse Public 1.0 License - http://www.eclipse.org/legal/epl-v10.html - -Jetty may be distributed under either license. - ------- -Eclipse - -The following artifacts are EPL. - * org.eclipse.jetty.orbit:org.eclipse.jdt.core - -The following artifacts are EPL and ASL2. - * org.eclipse.jetty.orbit:javax.security.auth.message - - -The following artifacts are EPL and CDDL 1.0. - * org.eclipse.jetty.orbit:javax.mail.glassfish - - ------- -Oracle - -The following artifacts are CDDL + GPLv2 with classpath exception. -https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html - - * javax.servlet:javax.servlet-api - * javax.annotation:javax.annotation-api - * javax.transaction:javax.transaction-api - * javax.websocket:javax.websocket-api - ------- -Oracle OpenJDK - -If ALPN is used to negotiate HTTP/2 connections, then the following -artifacts may be included in the distribution or downloaded when ALPN -module is selected. - - * java.sun.security.ssl - -These artifacts replace/modify OpenJDK classes. The modififications -are hosted at github and both modified and original are under GPL v2 with -classpath exceptions. -http://openjdk.java.net/legal/gplv2+ce.html - - ------- -OW2 - -The following artifacts are licensed by the OW2 Foundation according to the -terms of http://asm.ow2.org/license.html - -org.ow2.asm:asm-commons -org.ow2.asm:asm - - ------- -Apache - -The following artifacts are ASL2 licensed. - -org.apache.taglibs:taglibs-standard-spec -org.apache.taglibs:taglibs-standard-impl - - ------- -MortBay - -The following artifacts are ASL2 licensed. Based on selected classes from -following Apache Tomcat jars, all ASL2 licensed. - -org.mortbay.jasper:apache-jsp - org.apache.tomcat:tomcat-jasper - org.apache.tomcat:tomcat-juli - org.apache.tomcat:tomcat-jsp-api - org.apache.tomcat:tomcat-el-api - org.apache.tomcat:tomcat-jasper-el - org.apache.tomcat:tomcat-api - org.apache.tomcat:tomcat-util-scan - org.apache.tomcat:tomcat-util - -org.mortbay.jasper:apache-el - org.apache.tomcat:tomcat-jasper-el - org.apache.tomcat:tomcat-el-api - - ------- -Mortbay - -The following artifacts are CDDL + GPLv2 with classpath exception. - -https://glassfish.dev.java.net/nonav/public/CDDL+GPL.html - -org.eclipse.jetty.toolchain:jetty-schemas - ------- -Assorted - -The UnixCrypt.java code implements the one way cryptography used by -Unix systems for simple password protection. Copyright 1996 Aki Yoshida, -modified April 2001 by Iris Van den Broeke, Daniel Deville. -Permission to use, copy, modify and distribute UnixCrypt -for non-commercial or commercial purposes and without fee is -granted provided that the copyright notice appears in all copies. - - -Apache log4j -Copyright 2007 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -Maven Artifact -Copyright 2001-2019 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -This product includes software developed by the Indiana University - Extreme! Lab (http://www.extreme.indiana.edu/). - -This product includes software developed by -The Apache Software Foundation (http://www.apache.org/). - -This product includes software developed by -ThoughtWorks (http://www.thoughtworks.com). - -This product includes software developed by -javolution (http://javolution.org/). 
- -This product includes software developed by -Rome (https://rome.dev.java.net/). - - -Scala -Copyright (c) 2002-2020 EPFL -Copyright (c) 2011-2020 Lightbend, Inc. - -Scala includes software developed at -LAMP/EPFL (https://lamp.epfl.ch/) and -Lightbend, Inc. (https://www.lightbend.com/). - -Licensed under the Apache License, Version 2.0 (the "License"). -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -This software includes projects with other licenses -- see `doc/LICENSE.md`. - - -Apache ZooKeeper - Server -Copyright 2008-2021 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -Apache ZooKeeper - Jute -Copyright 2008-2021 The Apache Software Foundation - -This product includes software developed at -The Apache Software Foundation (http://www.apache.org/). - - -The Netty Project - ================= - -Please visit the Netty web site for more information: - - * https://netty.io/ - -Copyright 2014 The Netty Project - -The Netty Project licenses this file to you under the Apache License, -version 2.0 (the "License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at: - - https://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -License for the specific language governing permissions and limitations -under the License. - -Also, please refer to each LICENSE..txt file, which is located in -the 'license' directory of the distribution file, for the license terms of the -components that this product depends on. - -------------------------------------------------------------------------------- -This product contains the extensions to Java Collections Framework which has -been derived from the works by JSR-166 EG, Doug Lea, and Jason T. 
Greene: - - * LICENSE: - * license/LICENSE.jsr166y.txt (Public Domain) - * HOMEPAGE: - * http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/ - * http://viewvc.jboss.org/cgi-bin/viewvc.cgi/jbosscache/experimental/jsr166/ - -This product contains a modified version of Robert Harder's Public Domain -Base64 Encoder and Decoder, which can be obtained at: - - * LICENSE: - * license/LICENSE.base64.txt (Public Domain) - * HOMEPAGE: - * http://iharder.sourceforge.net/current/java/base64/ - -This product contains a modified portion of 'Webbit', an event based -WebSocket and HTTP server, which can be obtained at: - - * LICENSE: - * license/LICENSE.webbit.txt (BSD License) - * HOMEPAGE: - * https://github.com/joewalnes/webbit - -This product contains a modified portion of 'SLF4J', a simple logging -facade for Java, which can be obtained at: - - * LICENSE: - * license/LICENSE.slf4j.txt (MIT License) - * HOMEPAGE: - * https://www.slf4j.org/ - -This product contains a modified portion of 'Apache Harmony', an open source -Java SE, which can be obtained at: - - * NOTICE: - * license/NOTICE.harmony.txt - * LICENSE: - * license/LICENSE.harmony.txt (Apache License 2.0) - * HOMEPAGE: - * https://archive.apache.org/dist/harmony/ - -This product contains a modified portion of 'jbzip2', a Java bzip2 compression -and decompression library written by Matthew J. Francis. It can be obtained at: - - * LICENSE: - * license/LICENSE.jbzip2.txt (MIT License) - * HOMEPAGE: - * https://code.google.com/p/jbzip2/ - -This product contains a modified portion of 'libdivsufsort', a C API library to construct -the suffix array and the Burrows-Wheeler transformed string for any input string of -a constant-size alphabet written by Yuta Mori. It can be obtained at: - - * LICENSE: - * license/LICENSE.libdivsufsort.txt (MIT License) - * HOMEPAGE: - * https://github.com/y-256/libdivsufsort - -This product contains a modified portion of Nitsan Wakart's 'JCTools', Java Concurrency Tools for the JVM, - which can be obtained at: - - * LICENSE: - * license/LICENSE.jctools.txt (ASL2 License) - * HOMEPAGE: - * https://github.com/JCTools/JCTools - -This product optionally depends on 'JZlib', a re-implementation of zlib in -pure Java, which can be obtained at: - - * LICENSE: - * license/LICENSE.jzlib.txt (BSD style License) - * HOMEPAGE: - * http://www.jcraft.com/jzlib/ - -This product optionally depends on 'Compress-LZF', a Java library for encoding and -decoding data in LZF format, written by Tatu Saloranta. It can be obtained at: - - * LICENSE: - * license/LICENSE.compress-lzf.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/ning/compress - -This product optionally depends on 'lz4', a LZ4 Java compression -and decompression library written by Adrien Grand. It can be obtained at: - - * LICENSE: - * license/LICENSE.lz4.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/jpountz/lz4-java - -This product optionally depends on 'lzma-java', a LZMA Java compression -and decompression library, which can be obtained at: - - * LICENSE: - * license/LICENSE.lzma-java.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/jponge/lzma-java - -This product contains a modified portion of 'jfastlz', a Java port of FastLZ compression -and decompression library written by William Kinney. 
It can be obtained at: - - * LICENSE: - * license/LICENSE.jfastlz.txt (MIT License) - * HOMEPAGE: - * https://code.google.com/p/jfastlz/ - -This product contains a modified portion of and optionally depends on 'Protocol Buffers', Google's data -interchange format, which can be obtained at: - - * LICENSE: - * license/LICENSE.protobuf.txt (New BSD License) - * HOMEPAGE: - * https://github.com/google/protobuf - -This product optionally depends on 'Bouncy Castle Crypto APIs' to generate -a temporary self-signed X.509 certificate when the JVM does not provide the -equivalent functionality. It can be obtained at: - - * LICENSE: - * license/LICENSE.bouncycastle.txt (MIT License) - * HOMEPAGE: - * https://www.bouncycastle.org/ - -This product optionally depends on 'Snappy', a compression library produced -by Google Inc, which can be obtained at: - - * LICENSE: - * license/LICENSE.snappy.txt (New BSD License) - * HOMEPAGE: - * https://github.com/google/snappy - -This product optionally depends on 'JBoss Marshalling', an alternative Java -serialization API, which can be obtained at: - - * LICENSE: - * license/LICENSE.jboss-marshalling.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/jboss-remoting/jboss-marshalling - -This product optionally depends on 'Caliper', Google's micro- -benchmarking framework, which can be obtained at: - - * LICENSE: - * license/LICENSE.caliper.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/google/caliper - -This product optionally depends on 'Apache Commons Logging', a logging -framework, which can be obtained at: - - * LICENSE: - * license/LICENSE.commons-logging.txt (Apache License 2.0) - * HOMEPAGE: - * https://commons.apache.org/logging/ - -This product optionally depends on 'Apache Log4J', a logging framework, which -can be obtained at: - - * LICENSE: - * license/LICENSE.log4j.txt (Apache License 2.0) - * HOMEPAGE: - * https://logging.apache.org/log4j/ - -This product optionally depends on 'Aalto XML', an ultra-high performance -non-blocking XML processor, which can be obtained at: - - * LICENSE: - * license/LICENSE.aalto-xml.txt (Apache License 2.0) - * HOMEPAGE: - * http://wiki.fasterxml.com/AaltoHome - -This product contains a modified version of 'HPACK', a Java implementation of -the HTTP/2 HPACK algorithm written by Twitter. It can be obtained at: - - * LICENSE: - * license/LICENSE.hpack.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/twitter/hpack - -This product contains a modified version of 'HPACK', a Java implementation of -the HTTP/2 HPACK algorithm written by Cory Benfield. It can be obtained at: - - * LICENSE: - * license/LICENSE.hyper-hpack.txt (MIT License) - * HOMEPAGE: - * https://github.com/python-hyper/hpack/ - -This product contains a modified version of 'HPACK', a Java implementation of -the HTTP/2 HPACK algorithm written by Tatsuhiro Tsujikawa. It can be obtained at: - - * LICENSE: - * license/LICENSE.nghttp2-hpack.txt (MIT License) - * HOMEPAGE: - * https://github.com/nghttp2/nghttp2/ - -This product contains a modified portion of 'Apache Commons Lang', a Java library -provides utilities for the java.lang API, which can be obtained at: - - * LICENSE: - * license/LICENSE.commons-lang.txt (Apache License 2.0) - * HOMEPAGE: - * https://commons.apache.org/proper/commons-lang/ - - -This product contains the Maven wrapper scripts from 'Maven Wrapper', that provides an easy way to ensure a user has everything necessary to run the Maven build. 
- - * LICENSE: - * license/LICENSE.mvn-wrapper.txt (Apache License 2.0) - * HOMEPAGE: - * https://github.com/takari/maven-wrapper - -This product contains the dnsinfo.h header file, that provides a way to retrieve the system DNS configuration on MacOS. -This private header is also used by Apple's open source - mDNSResponder (https://opensource.apple.com/tarballs/mDNSResponder/). - - * LICENSE: - * license/LICENSE.dnsinfo.txt (Apple Public Source License 2.0) - * HOMEPAGE: - * https://www.opensource.apple.com/source/configd/configd-453.19/dnsinfo/dnsinfo.h \ No newline at end of file diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-console-sink.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-console-sink.properties deleted file mode 100644 index e240a8f..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-console-sink.properties +++ /dev/null @@ -1,19 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -name=local-console-sink -connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector -tasks.max=1 -topics=connect-test \ No newline at end of file diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-console-source.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-console-source.properties deleted file mode 100644 index d0e2069..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-console-source.properties +++ /dev/null @@ -1,19 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -name=local-console-source -connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector -tasks.max=1 -topic=connect-test \ No newline at end of file diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-distributed.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-distributed.properties deleted file mode 100644 index cedad9a..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-distributed.properties +++ /dev/null @@ -1,89 +0,0 @@ -## -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. 
See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -## - -# This file contains some of the configurations for the Kafka Connect distributed worker. This file is intended -# to be used with the examples, and some settings may differ from those used in a production system, especially -# the `bootstrap.servers` and those specifying replication factors. - -# A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. -bootstrap.servers=localhost:9092 - -# unique name for the cluster, used in forming the Connect cluster group. Note that this must not conflict with consumer group IDs -group.id=connect-cluster - -# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will -# need to configure these based on the format they want their data in when loaded from or stored into Kafka -key.converter=org.apache.kafka.connect.json.JsonConverter -value.converter=org.apache.kafka.connect.json.JsonConverter -# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply -# it to -key.converter.schemas.enable=true -value.converter.schemas.enable=true - -# Topic to use for storing offsets. This topic should have many partitions and be replicated and compacted. -# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create -# the topic before starting Kafka Connect if a specific topic configuration is needed. -# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. -# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able -# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. -offset.storage.topic=connect-offsets -offset.storage.replication.factor=1 -#offset.storage.partitions=25 - -# Topic to use for storing connector and task configurations; note that this should be a single partition, highly replicated, -# and compacted topic. Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create -# the topic before starting Kafka Connect if a specific topic configuration is needed. -# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. -# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able -# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. -config.storage.topic=connect-configs -config.storage.replication.factor=1 - -# Topic to use for storing statuses. This topic can have multiple partitions and should be replicated and compacted. 
-# Kafka Connect will attempt to create the topic automatically when needed, but you can always manually create -# the topic before starting Kafka Connect if a specific topic configuration is needed. -# Most users will want to use the built-in default replication factor of 3 or in some cases even specify a larger value. -# Since this means there must be at least as many brokers as the maximum replication factor used, we'd like to be able -# to run this example on a single-broker cluster and so here we instead set the replication factor to 1. -status.storage.topic=connect-status -status.storage.replication.factor=1 -#status.storage.partitions=5 - -# Flush much faster than normal, which is useful for testing/debugging -offset.flush.interval.ms=10000 - -# List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. -# Specify hostname as 0.0.0.0 to bind to all interfaces. -# Leave hostname empty to bind to default interface. -# Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084" -#listeners=HTTP://:8083 - -# The Hostname & Port that will be given out to other workers to connect to i.e. URLs that are routable from other servers. -# If not set, it uses the value for "listeners" if configured. -#rest.advertised.host.name= -#rest.advertised.port= -#rest.advertised.listener= - -# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins -# (connectors, converters, transformations). The list should consist of top level directories that include -# any combination of: -# a) directories immediately containing jars with plugins and their dependencies -# b) uber-jars with plugins and their dependencies -# c) directories immediately containing the package directory structure of classes of plugins and their dependencies -# Examples: -# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors, -#plugin.path= diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-file-sink.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-file-sink.properties deleted file mode 100644 index 594ccc6..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-file-sink.properties +++ /dev/null @@ -1,20 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -name=local-file-sink -connector.class=FileStreamSink -tasks.max=1 -file=test.sink.txt -topics=connect-test \ No newline at end of file diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-file-source.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-file-source.properties deleted file mode 100644 index 599cf4c..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-file-source.properties +++ /dev/null @@ -1,20 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -name=local-file-source -connector.class=FileStreamSource -tasks.max=1 -file=test.txt -topic=connect-test \ No newline at end of file diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-log4j.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-log4j.properties deleted file mode 100644 index 157d593..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-log4j.properties +++ /dev/null @@ -1,42 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -log4j.rootLogger=INFO, stdout, connectAppender - -# Send the logs to the console. -# -log4j.appender.stdout=org.apache.log4j.ConsoleAppender -log4j.appender.stdout.layout=org.apache.log4j.PatternLayout - -# Send the logs to a file, rolling the file at midnight local time. For example, the `File` option specifies the -# location of the log files (e.g. ${kafka.logs.dir}/connect.log), and at midnight local time the file is closed -# and copied in the same directory but with a filename that ends in the `DatePattern` option. -# -log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender -log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH -log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log -log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout - -# The `%X{connector.context}` parameter in the layout includes connector-specific and task-specific information -# in the log messages, where appropriate. This makes it easier to identify those log messages that apply to a -# specific connector. 
-# -connect.log.pattern=[%d] %p %X{connector.context}%m (%c:%L)%n - -log4j.appender.stdout.layout.ConversionPattern=${connect.log.pattern} -log4j.appender.connectAppender.layout.ConversionPattern=${connect.log.pattern} - -log4j.logger.org.apache.zookeeper=ERROR -log4j.logger.org.reflections=ERROR diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-mirror-maker.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-mirror-maker.properties deleted file mode 100644 index 40afda5..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-mirror-maker.properties +++ /dev/null @@ -1,59 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# see org.apache.kafka.clients.consumer.ConsumerConfig for more details - -# Sample MirrorMaker 2.0 top-level configuration file -# Run with ./bin/connect-mirror-maker.sh connect-mirror-maker.properties - -# specify any number of cluster aliases -clusters = A, B - -# connection information for each cluster -# This is a comma separated host:port pairs for each cluster -# for e.g. "A_host1:9092, A_host2:9092, A_host3:9092" -A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092 -B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092 - -# enable and configure individual replication flows -A->B.enabled = true - -# regex which defines which topics get replicated. For eg "foo-.*" -A->B.topics = .* - -B->A.enabled = true -B->A.topics = .* - -# Setting replication factor of newly created remote topics -replication.factor=1 - -############################# Internal Topic Settings ############################# - -# The replication factor for mm2 internal topics "heartbeats", "B.checkpoints.internal" and -# "mm2-offset-syncs.B.internal" -# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. -checkpoints.topic.replication.factor=1 -heartbeats.topic.replication.factor=1 -offset-syncs.topic.replication.factor=1 - -# The replication factor for connect internal topics "mm2-configs.B.internal", "mm2-offsets.B.internal" and -# "mm2-status.B.internal" -# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
-offset.storage.replication.factor=1 -status.storage.replication.factor=1 -config.storage.replication.factor=1 - -# customize as needed -# replication.policy.separator = _ -# sync.topic.acls.enabled = false -# emit.heartbeats.interval.seconds = 5 diff --git a/thirdparty/kafka_2.13-3.1.0/config/connect-standalone.properties b/thirdparty/kafka_2.13-3.1.0/config/connect-standalone.properties deleted file mode 100644 index a340a3b..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/connect-standalone.properties +++ /dev/null @@ -1,41 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# These are defaults. This file just demonstrates how to override some settings. -bootstrap.servers=localhost:9092 - -# The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will -# need to configure these based on the format they want their data in when loaded from or stored into Kafka -key.converter=org.apache.kafka.connect.json.JsonConverter -value.converter=org.apache.kafka.connect.json.JsonConverter -# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter we want to apply -# it to -key.converter.schemas.enable=true -value.converter.schemas.enable=true - -offset.storage.file.filename=/tmp/connect.offsets -# Flush much faster than normal, which is useful for testing/debugging -offset.flush.interval.ms=10000 - -# Set to a list of filesystem paths separated by commas (,) to enable class loading isolation for plugins -# (connectors, converters, transformations). The list should consist of top level directories that include -# any combination of: -# a) directories immediately containing jars with plugins and their dependencies -# b) uber-jars with plugins and their dependencies -# c) directories immediately containing the package directory structure of classes of plugins and their dependencies -# Note: symlinks will be followed to discover dependencies or plugins. -# Examples: -# plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors, -#plugin.path= diff --git a/thirdparty/kafka_2.13-3.1.0/config/consumer.properties b/thirdparty/kafka_2.13-3.1.0/config/consumer.properties deleted file mode 100644 index 01bb12e..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/consumer.properties +++ /dev/null @@ -1,26 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. 
You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# see org.apache.kafka.clients.consumer.ConsumerConfig for more details - -# list of brokers used for bootstrapping knowledge about the rest of the cluster -# format: host1:port1,host2:port2 ... -bootstrap.servers=localhost:9092 - -# consumer group id -group.id=test-consumer-group - -# What to do when there is no initial offset in Kafka or if the current -# offset does not exist any more on the server: latest, earliest, none -#auto.offset.reset= diff --git a/thirdparty/kafka_2.13-3.1.0/config/kraft/README.md b/thirdparty/kafka_2.13-3.1.0/config/kraft/README.md deleted file mode 100644 index 80bc8ca..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/kraft/README.md +++ /dev/null @@ -1,173 +0,0 @@ -KRaft (aka KIP-500) mode Preview Release -========================================================= - -# Introduction -It is now possible to run Apache Kafka without Apache ZooKeeper! We call this the [Kafka Raft metadata mode](https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum), typically shortened to `KRaft mode`. -`KRaft` is intended to be pronounced like `craft` (as in `craftsmanship`). It is currently *PREVIEW AND SHOULD NOT BE USED IN PRODUCTION*, but it -is available for testing in the Kafka 3.1 release. - -When the Kafka cluster is in KRaft mode, it does not store its metadata in ZooKeeper. In fact, you do not have to run ZooKeeper at all, because it stores its metadata in a KRaft quorum of controller nodes. - -KRaft mode has many benefits -- some obvious, and some not so obvious. Clearly, it is nice to manage and configure one service rather than two services. In addition, you can now run a single process Kafka cluster. -Most important of all, KRaft mode is more scalable. We expect to be able to [support many more topics and partitions](https://www.confluent.io/kafka-summit-san-francisco-2019/kafka-needs-no-keeper/) in this mode. - -# Quickstart - -## Warning -KRaft mode in Kafka 3.1 is provided for testing only, *NOT* for production. We do not yet support upgrading existing ZooKeeper-based Kafka clusters into this mode. -There may be bugs, including serious ones. You should *assume that your data could be lost at any time* if you try the preview release of KRaft mode. - -## Generate a cluster ID -The first step is to generate an ID for your new cluster, using the kafka-storage tool: - -~~~~ -$ ./bin/kafka-storage.sh random-uuid -xtzWWN4bTjitpL3kfd9s5g -~~~~ - -## Format Storage Directories -The next step is to format your storage directories. If you are running in single-node mode, you can do this with one command: - -~~~~ -$ ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties -Formatting /tmp/kraft-combined-logs -~~~~ - -If you are using multiple nodes, then you should run the format command on each node. Be sure to use the same cluster ID for each one. - -## Start the Kafka Server -Finally, you are ready to start the Kafka server on each node.
- -~~~~ -$ ./bin/kafka-server-start.sh ./config/kraft/server.properties -[2021-02-26 15:37:11,071] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) -[2021-02-26 15:37:11,294] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) -[2021-02-26 15:37:11,466] INFO [Log partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) -[2021-02-26 15:37:11,509] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper) -[2021-02-26 15:37:11,640] INFO [RaftManager nodeId=1] Completed transition to Unattached(epoch=0, voters=[1], electionTimeoutMs=9037) (org.apache.kafka.raft.QuorumState) -... -~~~~ - -Just like with a ZooKeeper based broker, you can connect to port 9092 (or whatever port you configured) to perform administrative operations or produce or consume data. - -~~~~ -$ ./bin/kafka-topics.sh --create --topic foo --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092 -Created topic foo. -~~~~ - -# Deployment - -## Controller Servers -In KRaft mode, only a small group of specially selected servers can act as controllers (unlike the ZooKeeper-based mode, where any server can become the -Controller). The specially selected controller servers will participate in the metadata quorum. Each controller server is either active, or a hot -standby for the current active controller server. - -You will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand -without availability impact. Just like with ZooKeeper, you must keep a majority of the controllers alive in order to maintain availability. So if you have 3 -controllers, you can tolerate 1 failure; with 5 controllers, you can tolerate 2 failures. - -## Process Roles -Each Kafka server now has a new configuration key called `process.roles` which can have the following values: - -* If `process.roles` is set to `broker`, the server acts as a broker in KRaft mode. -* If `process.roles` is set to `controller`, the server acts as a controller in KRaft mode. -* If `process.roles` is set to `broker,controller`, the server acts as both a broker and a controller in KRaft mode. -* If `process.roles` is not set at all then we are assumed to be in ZooKeeper mode. As mentioned earlier, you can't currently transition back and forth between ZooKeeper mode and KRaft mode without reformatting. - -Nodes that act as both brokers and controllers are referred to as "combined" nodes. Combined nodes are simpler to operate for simple use cases and allow you to avoid -some fixed memory overheads associated with JVMs. The key disadvantage is that the controller will be less isolated from the rest of the system. For example, if activity on the broker causes an out of -memory condition, the controller part of the server is not isolated from that OOM condition. - -## Quorum Voters -All nodes in the system must set the `controller.quorum.voters` configuration. This identifies the quorum controller servers that should be used. All the controllers must be enumerated. -This is similar to how, when using ZooKeeper, the `zookeeper.connect` configuration must contain all the ZooKeeper servers. Unlike with the ZooKeeper config, however, `controller.quorum.voters` -also has IDs for each node. 
The format is id1@host1:port1,id2@host2:port2, etc. - -So if you have 10 brokers and 3 controllers named controller1, controller2, controller3, you might have the following configuration on controller1: -``` -process.roles=controller -node.id=1 -listeners=CONTROLLER://controller1.example.com:9093 -controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093 -``` - -Each broker and each controller must set `controller.quorum.voters`. Note that the node ID supplied in the `controller.quorum.voters` configuration must match that supplied to the server. -So on controller1, node.id must be set to 1, and so forth. Note that there is no requirement for controller IDs to start at 0 or 1. However, the easiest and least confusing way to allocate -node IDs is probably just to give each server a numeric ID, starting from 0. - -Note that clients never need to configure `controller.quorum.voters`; only servers do. - -## Kafka Storage Tool -As described above in the QuickStart section, you must use the `kafka-storage.sh` tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node. - -This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically. One reason for the change -is that auto-formatting can sometimes obscure an error condition. For example, under UNIX, if a data directory can't be mounted, it may show up as blank. In this case, auto-formatting would be the wrong thing to do. - -This is particularly important for the metadata log maintained by the controller servers. If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with -nothing in the log, which would cause all metadata to be lost. - -# Missing Features -We don't support any kind of upgrade right now, either to or from KRaft mode. This is an important gap that we are working on. - -Finally, the following Kafka features have not yet been fully implemented: - -* Support for certain security features: configuring a KRaft-based Authorizer, setting up SCRAM, delegation tokens, and so forth - (although note that you can use authorizers such as `kafka.security.authorizer.AclAuthorizer` with KRaft clusters, even - if they are ZooKeeper-based: simply define `authorizer.class.name` and configure the authorizer as you normally would). -* Support for some configurations, like enabling unclean leader election by default or dynamically changing broker endpoints -* Support for KIP-112 "JBOD" modes - -We've tried to make it clear when a feature is not supported in the preview release, but you may encounter some rough edges. We will cover these feature gaps incrementally in the `trunk` branch. - -# Debugging -If you encounter an issue, you might want to take a look at the metadata log. 
- -## kafka-dump-log -One way to view the metadata log is with kafka-dump-log.sh tool, like so: - -~~~~ -$ ./bin/kafka-dump-log.sh --cluster-metadata-decoder --skip-record-metadata --files /tmp/kraft-combined-logs/__cluster_metadata-0/*.log -Dumping /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log -Starting offset: 0 -baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: true position: 0 CreateTime: 1614382631640 size: 89 magic: 2 compresscodec: NONE crc: 1438115474 isvalid: true - -baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 89 CreateTime: 1614382632329 size: 137 magic: 2 compresscodec: NONE crc: 1095855865 isvalid: true - payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"P3UFsWoNR-erL9PK98YLsA","brokerEpoch":0,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}} -baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 226 CreateTime: 1614382632453 size: 83 magic: 2 compresscodec: NONE crc: 455187130 isvalid: true - payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}} -baseOffset: 3 lastOffset: 3 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 309 CreateTime: 1614382634484 size: 83 magic: 2 compresscodec: NONE crc: 4055692847 isvalid: true - payload: {"type":"FENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}} -baseOffset: 4 lastOffset: 4 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: true position: 392 CreateTime: 1614382671857 size: 89 magic: 2 compresscodec: NONE crc: 1318571838 isvalid: true - -baseOffset: 5 lastOffset: 5 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 481 CreateTime: 1614382672440 size: 137 magic: 2 compresscodec: NONE crc: 841144615 isvalid: true - payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"RXRJu7cnScKRZOnWQGs86g","brokerEpoch":4,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}} -baseOffset: 6 lastOffset: 6 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 618 CreateTime: 1614382672544 size: 83 magic: 2 compresscodec: NONE crc: 4155905922 isvalid: true - payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":4}} -baseOffset: 7 lastOffset: 8 count: 2 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 701 CreateTime: 1614382712158 size: 159 magic: 2 compresscodec: NONE crc: 3726758683 isvalid: true - payload: {"type":"TOPIC_RECORD","version":0,"data":{"name":"foo","topicId":"5zoAlv-xEh9xRANKXt1Lbg"}} - payload: 
{"type":"PARTITION_RECORD","version":0,"data":{"partitionId":0,"topicId":"5zoAlv-xEh9xRANKXt1Lbg","replicas":[1],"isr":[1],"removingReplicas":null,"addingReplicas":null,"leader":1,"leaderEpoch":0,"partitionEpoch":0}} -~~~~ - -## The Metadata Shell -Another tool for examining the metadata logs is the Kafka metadata shell. Just like the ZooKeeper shell, this allows you to inspect the metadata of the cluster. - -~~~~ -$ ./bin/kafka-metadata-shell.sh --snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log ->> ls / -brokers local metadataQuorum topicIds topics ->> ls /topics -foo ->> cat /topics/foo/0/data -{ - "partitionId" : 0, - "topicId" : "5zoAlv-xEh9xRANKXt1Lbg", - "replicas" : [ 1 ], - "isr" : [ 1 ], - "removingReplicas" : null, - "addingReplicas" : null, - "leader" : 1, - "leaderEpoch" : 0, - "partitionEpoch" : 0 -} ->> exit -~~~~ diff --git a/thirdparty/kafka_2.13-3.1.0/config/kraft/broker.properties b/thirdparty/kafka_2.13-3.1.0/config/kraft/broker.properties deleted file mode 100644 index dfbd6ec..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/kraft/broker.properties +++ /dev/null @@ -1,128 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# -# This configuration file is intended for use in KRaft mode, where -# Apache ZooKeeper is not present. See config/kraft/README.md for details. -# - -############################# Server Basics ############################# - -# The role of this server. Setting this puts us in KRaft mode -process.roles=broker - -# The node id associated with this instance's roles -node.id=2 - -# The connect string for the controller quorum -controller.quorum.voters=1@localhost:9093 - -############################# Socket Server Settings ############################# - -# The address the socket server listens on. It will get the value returned from -# java.net.InetAddress.getCanonicalHostName() if not configured. -# FORMAT: -# listeners = listener_name://host_name:port -# EXAMPLE: -# listeners = PLAINTEXT://your.host.name:9092 -listeners=PLAINTEXT://localhost:9092 -inter.broker.listener.name=PLAINTEXT - -# Hostname and port the broker will advertise to producers and consumers. If not set, -# it uses the value for "listeners" if configured. Otherwise, it will use the value -# returned from java.net.InetAddress.getCanonicalHostName(). -advertised.listeners=PLAINTEXT://localhost:9092 - -# Listener, host name, and port for the controller to advertise to the brokers. If -# this server is a controller, this listener must be configured. -controller.listener.names=CONTROLLER - -# Maps listener names to security protocols, the default is for them to be the same. 
See the config documentation for more details -listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL - -# The number of threads that the server uses for receiving requests from the network and sending responses to the network -num.network.threads=3 - -# The number of threads that the server uses for processing requests, which may include disk I/O -num.io.threads=8 - -# The send buffer (SO_SNDBUF) used by the socket server -socket.send.buffer.bytes=102400 - -# The receive buffer (SO_RCVBUF) used by the socket server -socket.receive.buffer.bytes=102400 - -# The maximum size of a request that the socket server will accept (protection against OOM) -socket.request.max.bytes=104857600 - - -############################# Log Basics ############################# - -# A comma separated list of directories under which to store log files -log.dirs=/tmp/kraft-broker-logs - -# The default number of log partitions per topic. More partitions allow greater -# parallelism for consumption, but this will also result in more files across -# the brokers. -num.partitions=1 - -# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. -# This value is recommended to be increased for installations with data dirs located in RAID array. -num.recovery.threads.per.data.dir=1 - -############################# Internal Topic Settings ############################# -# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state" -# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. -offsets.topic.replication.factor=1 -transaction.state.log.replication.factor=1 -transaction.state.log.min.isr=1 - -############################# Log Flush Policy ############################# - -# Messages are immediately written to the filesystem but by default we only fsync() to sync -# the OS cache lazily. The following configurations control the flush of data to disk. -# There are a few important trade-offs here: -# 1. Durability: Unflushed data may be lost if you are not using replication. -# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush. -# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks. -# The settings below allow one to configure the flush policy to flush data after a period of time or -# every N messages (or both). This can be done globally and overridden on a per-topic basis. - -# The number of messages to accept before forcing a flush of data to disk -#log.flush.interval.messages=10000 - -# The maximum amount of time a message can sit in a log before we force a flush -#log.flush.interval.ms=1000 - -############################# Log Retention Policy ############################# - -# The following configurations control the disposal of log segments. The policy can -# be set to delete segments after a period of time, or after a given size has accumulated. -# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens -# from the end of the log. - -# The minimum age of a log file to be eligible for deletion due to age -log.retention.hours=168 - -# A size-based retention policy for logs. Segments are pruned from the log unless the remaining -# segments drop below log.retention.bytes. 
Functions independently of log.retention.hours. -#log.retention.bytes=1073741824 - -# The maximum size of a log segment file. When this size is reached a new log segment will be created. -log.segment.bytes=1073741824 - -# The interval at which log segments are checked to see if they can be deleted according -# to the retention policies -log.retention.check.interval.ms=300000 diff --git a/thirdparty/kafka_2.13-3.1.0/config/kraft/controller.properties b/thirdparty/kafka_2.13-3.1.0/config/kraft/controller.properties deleted file mode 100644 index 54aa7fb..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/kraft/controller.properties +++ /dev/null @@ -1,127 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# -# This configuration file is intended for use in KRaft mode, where -# Apache ZooKeeper is not present. See config/kraft/README.md for details. -# - -############################# Server Basics ############################# - -# The role of this server. Setting this puts us in KRaft mode -process.roles=controller - -# The node id associated with this instance's roles -node.id=1 - -# The connect string for the controller quorum -controller.quorum.voters=1@localhost:9093 - -############################# Socket Server Settings ############################# - -# The address the socket server listens on. It will get the value returned from -# java.net.InetAddress.getCanonicalHostName() if not configured. -# FORMAT: -# listeners = listener_name://host_name:port -# EXAMPLE: -# listeners = PLAINTEXT://your.host.name:9092 -listeners=PLAINTEXT://:9093 - -# Hostname and port the broker will advertise to producers and consumers. If not set, -# it uses the value for "listeners" if configured. Otherwise, it will use the value -# returned from java.net.InetAddress.getCanonicalHostName(). -#advertised.listeners=PLAINTEXT://your.host.name:9092 - -# Listener, host name, and port for the controller to advertise to the brokers. If -# this server is a controller, this listener must be configured. -controller.listener.names=PLAINTEXT - -# Maps listener names to security protocols, the default is for them to be the same. 
See the config documentation for more details -#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL - -# The number of threads that the server uses for receiving requests from the network and sending responses to the network -num.network.threads=3 - -# The number of threads that the server uses for processing requests, which may include disk I/O -num.io.threads=8 - -# The send buffer (SO_SNDBUF) used by the socket server -socket.send.buffer.bytes=102400 - -# The receive buffer (SO_RCVBUF) used by the socket server -socket.receive.buffer.bytes=102400 - -# The maximum size of a request that the socket server will accept (protection against OOM) -socket.request.max.bytes=104857600 - - -############################# Log Basics ############################# - -# A comma separated list of directories under which to store log files -log.dirs=/tmp/kraft-controller-logs - -# The default number of log partitions per topic. More partitions allow greater -# parallelism for consumption, but this will also result in more files across -# the brokers. -num.partitions=1 - -# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. -# This value is recommended to be increased for installations with data dirs located in RAID array. -num.recovery.threads.per.data.dir=1 - -############################# Internal Topic Settings ############################# -# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state" -# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. -offsets.topic.replication.factor=1 -transaction.state.log.replication.factor=1 -transaction.state.log.min.isr=1 - -############################# Log Flush Policy ############################# - -# Messages are immediately written to the filesystem but by default we only fsync() to sync -# the OS cache lazily. The following configurations control the flush of data to disk. -# There are a few important trade-offs here: -# 1. Durability: Unflushed data may be lost if you are not using replication. -# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush. -# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks. -# The settings below allow one to configure the flush policy to flush data after a period of time or -# every N messages (or both). This can be done globally and overridden on a per-topic basis. - -# The number of messages to accept before forcing a flush of data to disk -#log.flush.interval.messages=10000 - -# The maximum amount of time a message can sit in a log before we force a flush -#log.flush.interval.ms=1000 - -############################# Log Retention Policy ############################# - -# The following configurations control the disposal of log segments. The policy can -# be set to delete segments after a period of time, or after a given size has accumulated. -# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens -# from the end of the log. - -# The minimum age of a log file to be eligible for deletion due to age -log.retention.hours=168 - -# A size-based retention policy for logs. Segments are pruned from the log unless the remaining -# segments drop below log.retention.bytes. 
Functions independently of log.retention.hours. -#log.retention.bytes=1073741824 - -# The maximum size of a log segment file. When this size is reached a new log segment will be created. -log.segment.bytes=1073741824 - -# The interval at which log segments are checked to see if they can be deleted according -# to the retention policies -log.retention.check.interval.ms=300000 diff --git a/thirdparty/kafka_2.13-3.1.0/config/kraft/server.properties b/thirdparty/kafka_2.13-3.1.0/config/kraft/server.properties deleted file mode 100644 index 8e6406c..0000000 --- a/thirdparty/kafka_2.13-3.1.0/config/kraft/server.properties +++ /dev/null @@ -1,128 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# -# This configuration file is intended for use in KRaft mode, where -# Apache ZooKeeper is not present. See config/kraft/README.md for details. -# - -############################# Server Basics ############################# - -# The role of this server. Setting this puts us in KRaft mode -process.roles=broker,controller - -# The node id associated with this instance's roles -node.id=1 - -# The connect string for the controller quorum -controller.quorum.voters=1@localhost:9093 - -############################# Socket Server Settings ############################# - -# The address the socket server listens on. It will get the value returned from -# java.net.InetAddress.getCanonicalHostName() if not configured. -# FORMAT: -# listeners = listener_name://host_name:port -# EXAMPLE: -# listeners = PLAINTEXT://your.host.name:9092 -listeners=PLAINTEXT://:9092,CONTROLLER://:9093 -inter.broker.listener.name=PLAINTEXT - -# Hostname and port the broker will advertise to producers and consumers. If not set, -# it uses the value for "listeners" if configured. Otherwise, it will use the value -# returned from java.net.InetAddress.getCanonicalHostName(). -advertised.listeners=PLAINTEXT://localhost:9092 - -# Listener, host name, and port for the controller to advertise to the brokers. If -# this server is a controller, this listener must be configured. -controller.listener.names=CONTROLLER - -# Maps listener names to security protocols, the default is for them to be the same. 
See the config documentation for more details -listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL - -# The number of threads that the server uses for receiving requests from the network and sending responses to the network -num.network.threads=3 - -# The number of threads that the server uses for processing requests, which may include disk I/O -num.io.threads=8 - -# The send buffer (SO_SNDBUF) used by the socket server -socket.send.buffer.bytes=102400 - -# The receive buffer (SO_RCVBUF) used by the socket server -socket.receive.buffer.bytes=102400 - -# The maximum size of a request that the socket server will accept (protection against OOM) -socket.request.max.bytes=104857600 - - -############################# Log Basics ############################# - -# A comma separated list of directories under which to store log files -log.dirs=/tmp/kraft-combined-logs - -# The default number of log partitions per topic. More partitions allow greater -# parallelism for consumption, but this will also result in more files across -# the brokers. -num.partitions=1 - -# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. -# This value is recommended to be increased for installations with data dirs located in RAID array. -num.recovery.threads.per.data.dir=1 - -############################# Internal Topic Settings ############################# -# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state" -# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3. -offsets.topic.replication.factor=1 -transaction.state.log.replication.factor=1 -transaction.state.log.min.isr=1 - -############################# Log Flush Policy ############################# - -# Messages are immediately written to the filesystem but by default we only fsync() to sync -# the OS cache lazily. The following configurations control the flush of data to disk. -# There are a few important trade-offs here: -# 1. Durability: Unflushed data may be lost if you are not using replication. -# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush. -# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks. -# The settings below allow one to configure the flush policy to flush data after a period of time or -# every N messages (or both). This can be done globally and overridden on a per-topic basis. - -# The number of messages to accept before forcing a flush of data to disk -#log.flush.interval.messages=10000 - -# The maximum amount of time a message can sit in a log before we force a flush -#log.flush.interval.ms=1000 - -############################# Log Retention Policy ############################# - -# The following configurations control the disposal of log segments. The policy can -# be set to delete segments after a period of time, or after a given size has accumulated. -# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens -# from the end of the log. - -# The minimum age of a log file to be eligible for deletion due to age -log.retention.hours=168 - -# A size-based retention policy for logs. Segments are pruned from the log unless the remaining -# segments drop below log.retention.bytes. 
Functions independently of log.retention.hours.
-#log.retention.bytes=1073741824
-
-# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.segment.bytes=1073741824
-
-# The interval at which log segments are checked to see if they can be deleted according
-# to the retention policies
-log.retention.check.interval.ms=300000
diff --git a/thirdparty/kafka_2.13-3.1.0/config/log4j.properties b/thirdparty/kafka_2.13-3.1.0/config/log4j.properties
deleted file mode 100644
index 4cbce9d..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/log4j.properties
+++ /dev/null
@@ -1,91 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Unspecified loggers and loggers with additivity=true output to server.log and stdout
-# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
-log4j.rootLogger=INFO, stdout, kafkaAppender
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
-log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
-log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
-log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
-log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
-log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
-log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
-log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
-log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
-
-# Change the line below to adjust ZK client logging
-log4j.logger.org.apache.zookeeper=INFO
-
-# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
-log4j.logger.kafka=INFO
-log4j.logger.org.apache.kafka=INFO
-
-# Change to DEBUG or TRACE to enable request logging
-log4j.logger.kafka.request.logger=WARN, requestAppender
-log4j.additivity.kafka.request.logger=false
-
-# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
-# related to the handling of requests
-#log4j.logger.kafka.network.Processor=TRACE, requestAppender
-#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
-#log4j.additivity.kafka.server.KafkaApis=false
-log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
-log4j.additivity.kafka.network.RequestChannel$=false
-
-log4j.logger.kafka.controller=TRACE, controllerAppender
-log4j.additivity.kafka.controller=false
-
-log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
-log4j.additivity.kafka.log.LogCleaner=false
-
-log4j.logger.state.change.logger=INFO, stateChangeAppender
-log4j.additivity.state.change.logger=false
-
-# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
-log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
-log4j.additivity.kafka.authorizer.logger=false
-
diff --git a/thirdparty/kafka_2.13-3.1.0/config/producer.properties b/thirdparty/kafka_2.13-3.1.0/config/producer.properties
deleted file mode 100644
index 4786b98..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/producer.properties
+++ /dev/null
@@ -1,45 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# see org.apache.kafka.clients.producer.ProducerConfig for more details
-
-############################# Producer Basics #############################
-
-# list of brokers used for bootstrapping knowledge about the rest of the cluster
-# format: host1:port1,host2:port2 ...
-bootstrap.servers=localhost:9092
-
-# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
-compression.type=none
-
-# name of the partitioner class for partitioning events; default partition spreads data randomly
-#partitioner.class=
-
-# the maximum amount of time the client will wait for the response of a request
-#request.timeout.ms=
-
-# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
-#max.block.ms=
-
-# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
-#linger.ms=
-
-# the maximum size of a request in bytes
-#max.request.size=
-
-# the default batch size in bytes when batching multiple records sent to a partition
-#batch.size=
-
-# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
-#buffer.memory=
diff --git a/thirdparty/kafka_2.13-3.1.0/config/server.properties b/thirdparty/kafka_2.13-3.1.0/config/server.properties
deleted file mode 100644
index b1cf5c4..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/server.properties
+++ /dev/null
@@ -1,136 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# see kafka.server.KafkaConfig for additional details and defaults
-
-############################# Server Basics #############################
-
-# The id of the broker. This must be set to a unique integer for each broker.
-broker.id=0
-
-############################# Socket Server Settings #############################
-
-# The address the socket server listens on. It will get the value returned from
-# java.net.InetAddress.getCanonicalHostName() if not configured.
-# FORMAT:
-# listeners = listener_name://host_name:port
-# EXAMPLE:
-# listeners = PLAINTEXT://your.host.name:9092
-#listeners=PLAINTEXT://:9092
-
-# Hostname and port the broker will advertise to producers and consumers. If not set,
-# it uses the value for "listeners" if configured. Otherwise, it will use the value
-# returned from java.net.InetAddress.getCanonicalHostName().
-#advertised.listeners=PLAINTEXT://your.host.name:9092
-
-# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
-#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
-
-# The number of threads that the server uses for receiving requests from the network and sending responses to the network
-num.network.threads=3
-
-# The number of threads that the server uses for processing requests, which may include disk I/O
-num.io.threads=8
-
-# The send buffer (SO_SNDBUF) used by the socket server
-socket.send.buffer.bytes=102400
-
-# The receive buffer (SO_RCVBUF) used by the socket server
-socket.receive.buffer.bytes=102400
-
-# The maximum size of a request that the socket server will accept (protection against OOM)
-socket.request.max.bytes=104857600
-
-
-############################# Log Basics #############################
-
-# A comma separated list of directories under which to store log files
-log.dirs=/tmp/kafka-logs
-
-# The default number of log partitions per topic. More partitions allow greater
-# parallelism for consumption, but this will also result in more files across
-# the brokers.
-num.partitions=1
-
-# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
-# This value is recommended to be increased for installations with data dirs located in RAID array.
-num.recovery.threads.per.data.dir=1
-
-############################# Internal Topic Settings #############################
-# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
-# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
-offsets.topic.replication.factor=1
-transaction.state.log.replication.factor=1
-transaction.state.log.min.isr=1
-
-############################# Log Flush Policy #############################
-
-# Messages are immediately written to the filesystem but by default we only fsync() to sync
-# the OS cache lazily. The following configurations control the flush of data to disk.
-# There are a few important trade-offs here:
-# 1. Durability: Unflushed data may be lost if you are not using replication.
-# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
-# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
-# The settings below allow one to configure the flush policy to flush data after a period of time or
-# every N messages (or both). This can be done globally and overridden on a per-topic basis.
-
-# The number of messages to accept before forcing a flush of data to disk
-#log.flush.interval.messages=10000
-
-# The maximum amount of time a message can sit in a log before we force a flush
-#log.flush.interval.ms=1000
-
-############################# Log Retention Policy #############################
-
-# The following configurations control the disposal of log segments. The policy can
-# be set to delete segments after a period of time, or after a given size has accumulated.
-# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
-# from the end of the log.
-
-# The minimum age of a log file to be eligible for deletion due to age
-log.retention.hours=168
-
-# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
-# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
-#log.retention.bytes=1073741824
-
-# The maximum size of a log segment file. When this size is reached a new log segment will be created.
-log.segment.bytes=1073741824
-
-# The interval at which log segments are checked to see if they can be deleted according
-# to the retention policies
-log.retention.check.interval.ms=300000
-
-############################# Zookeeper #############################
-
-# Zookeeper connection string (see zookeeper docs for details).
-# This is a comma separated host:port pairs, each corresponding to a zk
-# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
-# You can also append an optional chroot string to the urls to specify the
-# root directory for all kafka znodes.
-zookeeper.connect=localhost:2181
-
-# Timeout in ms for connecting to zookeeper
-zookeeper.connection.timeout.ms=18000
-
-
-############################# Group Coordinator Settings #############################
-
-# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
-# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
-# The default value for this is 3 seconds.
-# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
-# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
-group.initial.rebalance.delay.ms=0
diff --git a/thirdparty/kafka_2.13-3.1.0/config/tools-log4j.properties b/thirdparty/kafka_2.13-3.1.0/config/tools-log4j.properties
deleted file mode 100644
index b19e343..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/tools-log4j.properties
+++ /dev/null
@@ -1,21 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-log4j.rootLogger=WARN, stderr
-
-log4j.appender.stderr=org.apache.log4j.ConsoleAppender
-log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
-log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
-log4j.appender.stderr.Target=System.err
diff --git a/thirdparty/kafka_2.13-3.1.0/config/trogdor.conf b/thirdparty/kafka_2.13-3.1.0/config/trogdor.conf
deleted file mode 100644
index 320cbe7..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/trogdor.conf
+++ /dev/null
@@ -1,25 +0,0 @@
-{
- "_comment": [
- "Licensed to the Apache Software Foundation (ASF) under one or more",
- "contributor license agreements. See the NOTICE file distributed with",
- "this work for additional information regarding copyright ownership.",
- "The ASF licenses this file to You under the Apache License, Version 2.0",
- "(the \"License\"); you may not use this file except in compliance with",
- "the License. You may obtain a copy of the License at",
- "",
- "http://www.apache.org/licenses/LICENSE-2.0",
- "",
- "Unless required by applicable law or agreed to in writing, software",
- "distributed under the License is distributed on an \"AS IS\" BASIS,",
- "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.",
- "See the License for the specific language governing permissions and",
- "limitations under the License."
- ],
- "platform": "org.apache.kafka.trogdor.basic.BasicPlatform", "nodes": {
- "node0": {
- "hostname": "localhost",
- "trogdor.agent.port": 8888,
- "trogdor.coordinator.port": 8889
- }
- }
-}
diff --git a/thirdparty/kafka_2.13-3.1.0/config/zookeeper.properties b/thirdparty/kafka_2.13-3.1.0/config/zookeeper.properties
deleted file mode 100644
index 90f4332..0000000
--- a/thirdparty/kafka_2.13-3.1.0/config/zookeeper.properties
+++ /dev/null
@@ -1,24 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# the directory where the snapshot is stored.
-dataDir=/tmp/zookeeper
-# the port at which the clients will connect
-clientPort=2181
-# disable the per-ip limit on the number of connections since this is a non-production config
-maxClientCnxns=0
-# Disable the adminserver by default to avoid port conflicts.
-# Set the port to something non-conflicting if choosing to enable this
-admin.enableServer=false
-# admin.serverPort=8080
diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/CDDL+GPL-1.1 b/thirdparty/kafka_2.13-3.1.0/licenses/CDDL+GPL-1.1
deleted file mode 100644
index 4b156e6..0000000
---
a/thirdparty/kafka_2.13-3.1.0/licenses/CDDL+GPL-1.1 +++ /dev/null @@ -1,760 +0,0 @@ -COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.1 - -1. Definitions. - - 1.1. "Contributor" means each individual or entity that creates or - contributes to the creation of Modifications. - - 1.2. "Contributor Version" means the combination of the Original - Software, prior Modifications used by a Contributor (if any), and - the Modifications made by that particular Contributor. - - 1.3. "Covered Software" means (a) the Original Software, or (b) - Modifications, or (c) the combination of files containing Original - Software with files containing Modifications, in each case including - portions thereof. - - 1.4. "Executable" means the Covered Software in any form other than - Source Code. - - 1.5. "Initial Developer" means the individual or entity that first - makes Original Software available under this License. - - 1.6. "Larger Work" means a work which combines Covered Software or - portions thereof with code not governed by the terms of this License. - - 1.7. "License" means this document. - - 1.8. "Licensable" means having the right to grant, to the maximum - extent possible, whether at the time of the initial grant or - subsequently acquired, any and all of the rights conveyed herein. - - 1.9. "Modifications" means the Source Code and Executable form of - any of the following: - - A. Any file that results from an addition to, deletion from or - modification of the contents of a file containing Original Software - or previous Modifications; - - B. Any new file that contains any part of the Original Software or - previous Modification; or - - C. Any new file that is contributed or otherwise made available - under the terms of this License. - - 1.10. "Original Software" means the Source Code and Executable form - of computer software code that is originally released under this - License. - - 1.11. "Patent Claims" means any patent claim(s), now owned or - hereafter acquired, including without limitation, method, process, - and apparatus claims, in any patent Licensable by grantor. - - 1.12. "Source Code" means (a) the common form of computer software - code in which modifications are made and (b) associated - documentation included in or with such code. - - 1.13. "You" (or "Your") means an individual or a legal entity - exercising rights under, and complying with all of the terms of, - this License. For legal entities, "You" includes any entity which - controls, is controlled by, or is under common control with You. For - purposes of this definition, "control" means (a) the power, direct - or indirect, to cause the direction or management of such entity, - whether by contract or otherwise, or (b) ownership of more than - fifty percent (50%) of the outstanding shares or beneficial - ownership of such entity. - -2. License Grants. - - 2.1. The Initial Developer Grant. 
- - Conditioned upon Your compliance with Section 3.1 below and subject - to third party intellectual property claims, the Initial Developer - hereby grants You a world-wide, royalty-free, non-exclusive license: - - (a) under intellectual property rights (other than patent or - trademark) Licensable by Initial Developer, to use, reproduce, - modify, display, perform, sublicense and distribute the Original - Software (or portions thereof), with or without Modifications, - and/or as part of a Larger Work; and - - (b) under Patent Claims infringed by the making, using or selling of - Original Software, to make, have made, use, practice, sell, and - offer for sale, and/or otherwise dispose of the Original Software - (or portions thereof). - - (c) The licenses granted in Sections 2.1(a) and (b) are effective on - the date Initial Developer first distributes or otherwise makes the - Original Software available to a third party under the terms of this - License. - - (d) Notwithstanding Section 2.1(b) above, no patent license is - granted: (1) for code that You delete from the Original Software, or - (2) for infringements caused by: (i) the modification of the - Original Software, or (ii) the combination of the Original Software - with other software or devices. - - 2.2. Contributor Grant. - - Conditioned upon Your compliance with Section 3.1 below and subject - to third party intellectual property claims, each Contributor hereby - grants You a world-wide, royalty-free, non-exclusive license: - - (a) under intellectual property rights (other than patent or - trademark) Licensable by Contributor to use, reproduce, modify, - display, perform, sublicense and distribute the Modifications - created by such Contributor (or portions thereof), either on an - unmodified basis, with other Modifications, as Covered Software - and/or as part of a Larger Work; and - - (b) under Patent Claims infringed by the making, using, or selling - of Modifications made by that Contributor either alone and/or in - combination with its Contributor Version (or portions of such - combination), to make, use, sell, offer for sale, have made, and/or - otherwise dispose of: (1) Modifications made by that Contributor (or - portions thereof); and (2) the combination of Modifications made by - that Contributor with its Contributor Version (or portions of such - combination). - - (c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective - on the date Contributor first distributes or otherwise makes the - Modifications available to a third party. - - (d) Notwithstanding Section 2.2(b) above, no patent license is - granted: (1) for any code that Contributor has deleted from the - Contributor Version; (2) for infringements caused by: (i) third - party modifications of Contributor Version, or (ii) the combination - of Modifications made by that Contributor with other software - (except as part of the Contributor Version) or other devices; or (3) - under Patent Claims infringed by Covered Software in the absence of - Modifications made by that Contributor. - -3. Distribution Obligations. - - 3.1. Availability of Source Code. - - Any Covered Software that You distribute or otherwise make available - in Executable form must also be made available in Source Code form - and that Source Code form must be distributed only under the terms - of this License. You must include a copy of this License with every - copy of the Source Code form of the Covered Software You distribute - or otherwise make available. 
You must inform recipients of any such - Covered Software in Executable form as to how they can obtain such - Covered Software in Source Code form in a reasonable manner on or - through a medium customarily used for software exchange. - - 3.2. Modifications. - - The Modifications that You create or to which You contribute are - governed by the terms of this License. You represent that You - believe Your Modifications are Your original creation(s) and/or You - have sufficient rights to grant the rights conveyed by this License. - - 3.3. Required Notices. - - You must include a notice in each of Your Modifications that - identifies You as the Contributor of the Modification. You may not - remove or alter any copyright, patent or trademark notices contained - within the Covered Software, or any notices of licensing or any - descriptive text giving attribution to any Contributor or the - Initial Developer. - - 3.4. Application of Additional Terms. - - You may not offer or impose any terms on any Covered Software in - Source Code form that alters or restricts the applicable version of - this License or the recipients' rights hereunder. You may choose to - offer, and to charge a fee for, warranty, support, indemnity or - liability obligations to one or more recipients of Covered Software. - However, you may do so only on Your own behalf, and not on behalf of - the Initial Developer or any Contributor. You must make it - absolutely clear that any such warranty, support, indemnity or - liability obligation is offered by You alone, and You hereby agree - to indemnify the Initial Developer and every Contributor for any - liability incurred by the Initial Developer or such Contributor as a - result of warranty, support, indemnity or liability terms You offer. - - 3.5. Distribution of Executable Versions. - - You may distribute the Executable form of the Covered Software under - the terms of this License or under the terms of a license of Your - choice, which may contain terms different from this License, - provided that You are in compliance with the terms of this License - and that the license for the Executable form does not attempt to - limit or alter the recipient's rights in the Source Code form from - the rights set forth in this License. If You distribute the Covered - Software in Executable form under a different license, You must make - it absolutely clear that any terms which differ from this License - are offered by You alone, not by the Initial Developer or - Contributor. You hereby agree to indemnify the Initial Developer and - every Contributor for any liability incurred by the Initial - Developer or such Contributor as a result of any such terms You offer. - - 3.6. Larger Works. - - You may create a Larger Work by combining Covered Software with - other code not governed by the terms of this License and distribute - the Larger Work as a single product. In such a case, You must make - sure the requirements of this License are fulfilled for the Covered - Software. - -4. Versions of the License. - - 4.1. New Versions. - - Oracle is the initial license steward and may publish revised and/or - new versions of this License from time to time. Each version will be - given a distinguishing version number. Except as provided in Section - 4.3, no one other than the license steward has the right to modify - this License. - - 4.2. Effect of New Versions. 
- - You may always continue to use, distribute or otherwise make the - Covered Software available under the terms of the version of the - License under which You originally received the Covered Software. If - the Initial Developer includes a notice in the Original Software - prohibiting it from being distributed or otherwise made available - under any subsequent version of the License, You must distribute and - make the Covered Software available under the terms of the version - of the License under which You originally received the Covered - Software. Otherwise, You may also choose to use, distribute or - otherwise make the Covered Software available under the terms of any - subsequent version of the License published by the license steward. - - 4.3. Modified Versions. - - When You are an Initial Developer and You want to create a new - license for Your Original Software, You may create and use a - modified version of this License if You: (a) rename the license and - remove any references to the name of the license steward (except to - note that the license differs from this License); and (b) otherwise - make it clear that the license contains terms which differ from this - License. - -5. DISCLAIMER OF WARRANTY. - - COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, - WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, - INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE - IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR - NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF - THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE - DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY - OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, - REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN - ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS - AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER. - -6. TERMINATION. - - 6.1. This License and the rights granted hereunder will terminate - automatically if You fail to comply with terms herein and fail to - cure such breach within 30 days of becoming aware of the breach. - Provisions which, by their nature, must remain in effect beyond the - termination of this License shall survive. - - 6.2. If You assert a patent infringement claim (excluding - declaratory judgment actions) against Initial Developer or a - Contributor (the Initial Developer or Contributor against whom You - assert such claim is referred to as "Participant") alleging that the - Participant Software (meaning the Contributor Version where the - Participant is a Contributor or the Original Software where the - Participant is the Initial Developer) directly or indirectly - infringes any patent, then any and all rights granted directly or - indirectly to You by such Participant, the Initial Developer (if the - Initial Developer is not the Participant) and all Contributors under - Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice - from Participant terminate prospectively and automatically at the - expiration of such 60 day notice period, unless if within such 60 - day period You withdraw Your claim with respect to the Participant - Software against such Participant either unilaterally or pursuant to - a written agreement with Participant. - - 6.3. 
If You assert a patent infringement claim against Participant - alleging that the Participant Software directly or indirectly - infringes any patent where such claim is resolved (such as by - license or settlement) prior to the initiation of patent - infringement litigation, then the reasonable value of the licenses - granted by such Participant under Sections 2.1 or 2.2 shall be taken - into account in determining the amount or value of any payment or - license. - - 6.4. In the event of termination under Sections 6.1 or 6.2 above, - all end user licenses that have been validly granted by You or any - distributor hereunder prior to termination (excluding licenses - granted to You by any distributor) shall survive termination. - -7. LIMITATION OF LIABILITY. - - UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT - (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE - INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF - COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE - TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR - CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT - LIMITATION, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER - FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR - LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE - POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT - APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH - PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH - LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR - LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION - AND LIMITATION MAY NOT APPLY TO YOU. - -8. U.S. GOVERNMENT END USERS. - - The Covered Software is a "commercial item," as that term is defined - in 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer - software" (as that term is defined at 48 C.F.R. § - 252.227-7014(a)(1)) and "commercial computer software documentation" - as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent - with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 - (June 1995), all U.S. Government End Users acquire Covered Software - with only those rights set forth herein. This U.S. Government Rights - clause is in lieu of, and supersedes, any other FAR, DFAR, or other - clause or provision that addresses Government rights in computer - software under this License. - -9. MISCELLANEOUS. - - This License represents the complete agreement concerning subject - matter hereof. If any provision of this License is held to be - unenforceable, such provision shall be reformed only to the extent - necessary to make it enforceable. This License shall be governed by - the law of the jurisdiction specified in a notice contained within - the Original Software (except to the extent applicable law, if any, - provides otherwise), excluding such jurisdiction's conflict-of-law - provisions. Any litigation relating to this License shall be subject - to the jurisdiction of the courts located in the jurisdiction and - venue specified in a notice contained within the Original Software, - with the losing party responsible for costs, including, without - limitation, court costs and reasonable attorneys' fees and expenses. - The application of the United Nations Convention on Contracts for - the International Sale of Goods is expressly excluded. 
Any law or - regulation which provides that the language of a contract shall be - construed against the drafter shall not apply to this License. You - agree that You alone are responsible for compliance with the United - States export administration regulations (and the export control - laws and regulation of any other countries) when You use, distribute - or otherwise make available any Covered Software. - -10. RESPONSIBILITY FOR CLAIMS. - - As between Initial Developer and the Contributors, each party is - responsible for claims and damages arising, directly or indirectly, - out of its utilization of rights under this License and You agree to - work with Initial Developer and Contributors to distribute such - responsibility on an equitable basis. Nothing herein is intended or - shall be deemed to constitute any admission of liability. - ------------------------------------------------------------------------- - -NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION -LICENSE (CDDL) - -The code released under the CDDL shall be governed by the laws of the -State of California (excluding conflict-of-law provisions). Any -litigation relating to this License shall be subject to the jurisdiction -of the Federal Courts of the Northern District of California and the -state courts of the State of California, with venue lying in Santa Clara -County, California. - - - - The GNU General Public License (GPL) Version 2, June 1991 - -Copyright (C) 1989, 1991 Free Software Foundation, Inc. -51 Franklin Street, Fifth Floor -Boston, MA 02110-1335 -USA - -Everyone is permitted to copy and distribute verbatim copies -of this license document, but changing it is not allowed. - -Preamble - -The licenses for most software are designed to take away your freedom to -share and change it. By contrast, the GNU General Public License is -intended to guarantee your freedom to share and change free software--to -make sure the software is free for all its users. This General Public -License applies to most of the Free Software Foundation's software and -to any other program whose authors commit to using it. (Some other Free -Software Foundation software is covered by the GNU Library General -Public License instead.) You can apply it to your programs, too. - -When we speak of free software, we are referring to freedom, not price. -Our General Public Licenses are designed to make sure that you have the -freedom to distribute copies of free software (and charge for this -service if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs; and that you know you can do these things. - -To protect your rights, we need to make restrictions that forbid anyone -to deny you these rights or to ask you to surrender the rights. These -restrictions translate to certain responsibilities for you if you -distribute copies of the software, or if you modify it. - -For example, if you distribute copies of such a program, whether gratis -or for a fee, you must give the recipients all the rights that you have. -You must make sure that they, too, receive or can get the source code. -And you must show them these terms so they know their rights. - -We protect your rights with two steps: (1) copyright the software, and -(2) offer you this license which gives you legal permission to copy, -distribute and/or modify the software. 
- -Also, for each author's protection and ours, we want to make certain -that everyone understands that there is no warranty for this free -software. If the software is modified by someone else and passed on, we -want its recipients to know that what they have is not the original, so -that any problems introduced by others will not reflect on the original -authors' reputations. - -Finally, any free program is threatened constantly by software patents. -We wish to avoid the danger that redistributors of a free program will -individually obtain patent licenses, in effect making the program -proprietary. To prevent this, we have made it clear that any patent must -be licensed for everyone's free use or not licensed at all. - -The precise terms and conditions for copying, distribution and -modification follow. - -TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION - -0. This License applies to any program or other work which contains a -notice placed by the copyright holder saying it may be distributed under -the terms of this General Public License. The "Program", below, refers -to any such program or work, and a "work based on the Program" means -either the Program or any derivative work under copyright law: that is -to say, a work containing the Program or a portion of it, either -verbatim or with modifications and/or translated into another language. -(Hereinafter, translation is included without limitation in the term -"modification".) Each licensee is addressed as "you". - -Activities other than copying, distribution and modification are not -covered by this License; they are outside its scope. The act of running -the Program is not restricted, and the output from the Program is -covered only if its contents constitute a work based on the Program -(independent of having been made by running the Program). Whether that -is true depends on what the Program does. - -1. You may copy and distribute verbatim copies of the Program's source -code as you receive it, in any medium, provided that you conspicuously -and appropriately publish on each copy an appropriate copyright notice -and disclaimer of warranty; keep intact all the notices that refer to -this License and to the absence of any warranty; and give any other -recipients of the Program a copy of this License along with the Program. - -You may charge a fee for the physical act of transferring a copy, and -you may at your option offer warranty protection in exchange for a fee. - -2. You may modify your copy or copies of the Program or any portion of -it, thus forming a work based on the Program, and copy and distribute -such modifications or work under the terms of Section 1 above, provided -that you also meet all of these conditions: - - a) You must cause the modified files to carry prominent notices - stating that you changed the files and the date of any change. - - b) You must cause any work that you distribute or publish, that in - whole or in part contains or is derived from the Program or any part - thereof, to be licensed as a whole at no charge to all third parties - under the terms of this License. 
- - c) If the modified program normally reads commands interactively - when run, you must cause it, when started running for such - interactive use in the most ordinary way, to print or display an - announcement including an appropriate copyright notice and a notice - that there is no warranty (or else, saying that you provide a - warranty) and that users may redistribute the program under these - conditions, and telling the user how to view a copy of this License. - (Exception: if the Program itself is interactive but does not - normally print such an announcement, your work based on the Program - is not required to print an announcement.) - -These requirements apply to the modified work as a whole. If -identifiable sections of that work are not derived from the Program, and -can be reasonably considered independent and separate works in -themselves, then this License, and its terms, do not apply to those -sections when you distribute them as separate works. But when you -distribute the same sections as part of a whole which is a work based on -the Program, the distribution of the whole must be on the terms of this -License, whose permissions for other licensees extend to the entire -whole, and thus to each and every part regardless of who wrote it. - -Thus, it is not the intent of this section to claim rights or contest -your rights to work written entirely by you; rather, the intent is to -exercise the right to control the distribution of derivative or -collective works based on the Program. - -In addition, mere aggregation of another work not based on the Program -with the Program (or with a work based on the Program) on a volume of a -storage or distribution medium does not bring the other work under the -scope of this License. - -3. You may copy and distribute the Program (or a work based on it, -under Section 2) in object code or executable form under the terms of -Sections 1 and 2 above provided that you also do one of the following: - - a) Accompany it with the complete corresponding machine-readable - source code, which must be distributed under the terms of Sections 1 - and 2 above on a medium customarily used for software interchange; or, - - b) Accompany it with a written offer, valid for at least three - years, to give any third party, for a charge no more than your cost - of physically performing source distribution, a complete - machine-readable copy of the corresponding source code, to be - distributed under the terms of Sections 1 and 2 above on a medium - customarily used for software interchange; or, - - c) Accompany it with the information you received as to the offer to - distribute corresponding source code. (This alternative is allowed - only for noncommercial distribution and only if you received the - program in object code or executable form with such an offer, in - accord with Subsection b above.) - -The source code for a work means the preferred form of the work for -making modifications to it. For an executable work, complete source code -means all the source code for all modules it contains, plus any -associated interface definition files, plus the scripts used to control -compilation and installation of the executable. However, as a special -exception, the source code distributed need not include anything that is -normally distributed (in either source or binary form) with the major -components (compiler, kernel, and so on) of the operating system on -which the executable runs, unless that component itself accompanies the -executable. 
- -If distribution of executable or object code is made by offering access -to copy from a designated place, then offering equivalent access to copy -the source code from the same place counts as distribution of the source -code, even though third parties are not compelled to copy the source -along with the object code. - -4. You may not copy, modify, sublicense, or distribute the Program -except as expressly provided under this License. Any attempt otherwise -to copy, modify, sublicense or distribute the Program is void, and will -automatically terminate your rights under this License. However, parties -who have received copies, or rights, from you under this License will -not have their licenses terminated so long as such parties remain in -full compliance. - -5. You are not required to accept this License, since you have not -signed it. However, nothing else grants you permission to modify or -distribute the Program or its derivative works. These actions are -prohibited by law if you do not accept this License. Therefore, by -modifying or distributing the Program (or any work based on the -Program), you indicate your acceptance of this License to do so, and all -its terms and conditions for copying, distributing or modifying the -Program or works based on it. - -6. Each time you redistribute the Program (or any work based on the -Program), the recipient automatically receives a license from the -original licensor to copy, distribute or modify the Program subject to -these terms and conditions. You may not impose any further restrictions -on the recipients' exercise of the rights granted herein. You are not -responsible for enforcing compliance by third parties to this License. - -7. If, as a consequence of a court judgment or allegation of patent -infringement or for any other reason (not limited to patent issues), -conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot distribute -so as to satisfy simultaneously your obligations under this License and -any other pertinent obligations, then as a consequence you may not -distribute the Program at all. For example, if a patent license would -not permit royalty-free redistribution of the Program by all those who -receive copies directly or indirectly through you, then the only way you -could satisfy both it and this License would be to refrain entirely from -distribution of the Program. - -If any portion of this section is held invalid or unenforceable under -any particular circumstance, the balance of the section is intended to -apply and the section as a whole is intended to apply in other -circumstances. - -It is not the purpose of this section to induce you to infringe any -patents or other property right claims or to contest validity of any -such claims; this section has the sole purpose of protecting the -integrity of the free software distribution system, which is implemented -by public license practices. Many people have made generous -contributions to the wide range of software distributed through that -system in reliance on consistent application of that system; it is up to -the author/donor to decide if he or she is willing to distribute -software through any other system and a licensee cannot impose that choice. - -This section is intended to make thoroughly clear what is believed to be -a consequence of the rest of this License. - -8. 
If the distribution and/or use of the Program is restricted in -certain countries either by patents or by copyrighted interfaces, the -original copyright holder who places the Program under this License may -add an explicit geographical distribution limitation excluding those -countries, so that distribution is permitted only in or among countries -not thus excluded. In such case, this License incorporates the -limitation as if written in the body of this License. - -9. The Free Software Foundation may publish revised and/or new -versions of the General Public License from time to time. Such new -versions will be similar in spirit to the present version, but may -differ in detail to address new problems or concerns. - -Each version is given a distinguishing version number. If the Program -specifies a version number of this License which applies to it and "any -later version", you have the option of following the terms and -conditions either of that version or of any later version published by -the Free Software Foundation. If the Program does not specify a version -number of this License, you may choose any version ever published by the -Free Software Foundation. - -10. If you wish to incorporate parts of the Program into other free -programs whose distribution conditions are different, write to the -author to ask for permission. For software which is copyrighted by the -Free Software Foundation, write to the Free Software Foundation; we -sometimes make exceptions for this. Our decision will be guided by the -two goals of preserving the free status of all derivatives of our free -software and of promoting the sharing and reuse of software generally. - -NO WARRANTY - -11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO -WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. -EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR -OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, -EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE -ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH -YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL -NECESSARY SERVICING, REPAIR OR CORRECTION. - -12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN -WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY -AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR -DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL -DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM -(INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED -INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF -THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR -OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. - -END OF TERMS AND CONDITIONS - -How to Apply These Terms to Your New Programs - -If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - -To do so, attach the following notices to the program. It is safest to -attach them to the start of each source file to most effectively convey -the exclusion of warranty; and each file should have at least the -"copyright" line and a pointer to where the full notice is found. 
- - One line to give the program's name and a brief idea of what it does. - Copyright (C) - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, but - WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1335 USA - -Also add information on how to contact you by electronic and paper mail. - -If the program is interactive, make it output a short notice like this -when it starts in an interactive mode: - - Gnomovision version 69, Copyright (C) year name of author - Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type - `show w'. This is free software, and you are welcome to redistribute - it under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the -appropriate parts of the General Public License. Of course, the commands -you use may be called something other than `show w' and `show c'; they -could even be mouse-clicks or menu items--whatever suits your program. - -You should also get your employer (if you work as a programmer) or your -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. Here is a sample; alter the names: - - Yoyodyne, Inc., hereby disclaims all copyright interest in the - program `Gnomovision' (which makes passes at compilers) written by - James Hacker. - - signature of Ty Coon, 1 April 1989 - Ty Coon, President of Vice - -This General Public License does not permit incorporating your program -into proprietary programs. If your program is a subroutine library, you -may consider it more useful to permit linking proprietary applications -with the library. If this is what you want to do, use the GNU Library -General Public License instead of this License. - -# - -Certain source files distributed by Oracle America, Inc. and/or its -affiliates are subject to the following clarification and special -exception to the GPLv2, based on the GNU Project exception for its -Classpath libraries, known as the GNU Classpath Exception, but only -where Oracle has expressly included in the particular source file's -header the words "Oracle designates this particular file as subject to -the "Classpath" exception as provided by Oracle in the LICENSE file -that accompanied this code." - -You should also note that Oracle includes multiple, independent -programs in this software package. Some of those programs are provided -under licenses deemed incompatible with the GPLv2 by the Free Software -Foundation and others. For example, the package includes programs -licensed under the Apache License, Version 2.0. Such programs are -licensed to you under their original licenses. - -Oracle facilitates your further distribution of this package by adding -the Classpath Exception to the necessary parts of its GPLv2 code, which -permits you to use that code in combination with other independent -modules not licensed under the GPLv2. 
However, note that this would -not permit you to commingle code under an incompatible license with -Oracle's GPLv2 licensed code by, for example, cutting and pasting such -code into a file also containing Oracle's GPLv2 licensed code and then -distributing the result. Additionally, if you were to remove the -Classpath Exception from any of the files to which it applies and -distribute the result, you would likely be required to license some or -all of the other code in that distribution under the GPLv2 as well, and -since the GPLv2 is incompatible with the license terms of some items -included in the distribution by Oracle, removing the Classpath -Exception could therefore effectively compromise your ability to -further distribute the package. - -Proceed with caution and we recommend that you obtain the advice of a -lawyer skilled in open source matters before removing the Classpath -Exception or making modifications to this package which may -subsequently be redistributed and/or involve the use of third party -software. - -CLASSPATH EXCEPTION -Linking this library statically or dynamically with other modules is -making a combined work based on this library. Thus, the terms and -conditions of the GNU General Public License version 2 cover the whole -combination. - -As a special exception, the copyright holders of this library give you -permission to link this library with independent modules to produce an -executable, regardless of the license terms of these independent -modules, and to copy and distribute the resulting executable under -terms of your choice, provided that you also meet, for each linked -independent module, the terms and conditions of the license of that -module. An independent module is a module which is not derived from or -based on this library. If you modify this library, you may extend this -exception to your version of the library, but you are not obligated to -do so. If you do not wish to do so, delete this exception statement -from your version. - diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/DWTFYWTPL b/thirdparty/kafka_2.13-3.1.0/licenses/DWTFYWTPL deleted file mode 100644 index 5a8e332..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/DWTFYWTPL +++ /dev/null @@ -1,14 +0,0 @@ - DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE - Version 2, December 2004 - - Copyright (C) 2004 Sam Hocevar - - Everyone is permitted to copy and distribute verbatim or modified - copies of this license document, and changing it is allowed as long - as the name is changed. - - DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE - TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION - - 0. You just DO WHAT THE FUCK YOU WANT TO. 
- diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/argparse-MIT b/thirdparty/kafka_2.13-3.1.0/licenses/argparse-MIT deleted file mode 100644 index 773b0df..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/argparse-MIT +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright (C) 2011-2017 Tatsuhiro Tsujikawa - * - * Permission is hereby granted, free of charge, to any person - * obtaining a copy of this software and associated documentation - * files (the "Software"), to deal in the Software without - * restriction, including without limitation the rights to use, copy, - * modify, merge, publish, distribute, sublicense, and/or sell copies - * of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-distribution-license-1.0 b/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-distribution-license-1.0 deleted file mode 100644 index 5f06513..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-distribution-license-1.0 +++ /dev/null @@ -1,13 +0,0 @@ -Eclipse Distribution License - v 1.0 - -Copyright (c) 2007, Eclipse Foundation, Inc. and its licensors. - -All rights reserved. - -Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. -* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. -* Neither the name of the Eclipse Foundation, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-public-license-2.0 b/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-public-license-2.0 deleted file mode 100644 index c9f1425..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/eclipse-public-license-2.0 +++ /dev/null @@ -1,87 +0,0 @@ -Eclipse Public License - v 2.0 - -THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE PUBLIC LICENSE (“AGREEMENT”). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT. -1. DEFINITIONS - -“Contribution” means: - - a) in the case of the initial Contributor, the initial content Distributed under this Agreement, and - b) in the case of each subsequent Contributor: - i) changes to the Program, and - ii) additions to the Program; - where such changes and/or additions to the Program originate from and are Distributed by that particular Contributor. A Contribution “originates” from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include changes or additions to the Program that are not Modified Works. - -“Contributor” means any person or entity that Distributes the Program. - -“Licensed Patents” mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program. - -“Program” means the Contributions Distributed in accordance with this Agreement. - -“Recipient” means anyone who receives the Program under this Agreement or any Secondary License (as applicable), including Contributors. - -“Derivative Works” shall mean any work, whether in Source Code or other form, that is based on (or derived from) the Program and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. - -“Modified Works” shall mean any work in Source Code or other form that results from an addition to, deletion from, or modification of the contents of the Program, including, for purposes of clarity any new file in Source Code form that contains any contents of the Program. Modified Works shall not include works that contain only declarations, interfaces, types, classes, structures, or files of the Program solely in each case in order to link to, bind by name, or subclass the Program or Modified Works thereof. - -“Distribute” means the acts of a) distributing or b) making available in any manner that enables the transfer of a copy. - -“Source Code” means the form of a Program preferred for making modifications, including but not limited to software source code, documentation source, and configuration files. - -“Secondary License” means either the GNU General Public License, Version 2.0, or any later versions of that license, including any exceptions or additional permissions as identified by the initial Contributor. -2. GRANT OF RIGHTS - - a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, Distribute and sublicense the Contribution of such Contributor, if any, and such Derivative Works. 
- b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in Source Code or other form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder. - c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to Distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program. - d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement. - e) Notwithstanding the terms of any Secondary License, no Contributor makes additional grants to any Recipient (other than those set forth in this Agreement) as a result of such Recipient's receipt of the Program under the terms of a Secondary License (if permitted under the terms of Section 3). - -3. REQUIREMENTS - -3.1 If a Contributor Distributes the Program in any form, then: - - a) the Program must also be made available as Source Code, in accordance with section 3.2, and the Contributor must accompany the Program with a statement that the Source Code for the Program is available under this Agreement, and informs Recipients how to obtain it in a reasonable manner on or through a medium customarily used for software exchange; and - b) the Contributor may Distribute the Program under a license different than this Agreement, provided that such license: - i) effectively disclaims on behalf of all other Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose; - ii) effectively excludes on behalf of all other Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits; - iii) does not attempt to limit or alter the recipients' rights in the Source Code under section 3.2; and - iv) requires any subsequent distribution of the Program by any party to be under a license that satisfies the requirements of this section 3. 
- -3.2 When the Program is Distributed as Source Code: - - a) it must be made available under this Agreement, or if the Program (i) is combined with other material in a separate file or files made available under a Secondary License, and (ii) the initial Contributor attached to the Source Code the notice described in Exhibit A of this Agreement, then the Program may be made available under the terms of such Secondary Licenses, and - b) a copy of this Agreement must be included with each copy of the Program. - -3.3 Contributors may not remove or alter any copyright, patent, trademark, attribution notices, disclaimers of warranty, or limitations of liability (‘notices’) contained within the Program from any copy of the Program which they Distribute, provided that Contributors may add their own appropriate notices. -4. COMMERCIAL DISTRIBUTION - -Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor (“Commercial Contributor”) hereby agrees to defend and indemnify every other Contributor (“Indemnified Contributor”) against any losses, damages and costs (collectively “Losses”) arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense. - -For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages. -5. NO WARRANTY - -EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 
Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations. -6. DISCLAIMER OF LIABILITY - -EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. -7. GENERAL - -If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. - -If Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed. - -All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive. - -Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. The Eclipse Foundation is the initial Agreement Steward. The Eclipse Foundation may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be Distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to Distribute the Program (including its Contributions) under the new version. - -Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. 
All rights in the Program not expressly granted under this Agreement are reserved. Nothing in this Agreement is intended to be enforceable by any entity that is not a Contributor or Recipient. No third-party beneficiary rights are created under this Agreement. -Exhibit A – Form of Secondary Licenses Notice - -“This Source Code may also be made available under the following Secondary Licenses when the conditions for such availability set forth in the Eclipse Public License, v. 2.0 are satisfied: {name license(s), version(s), and exceptions or additional permissions here}.” - - Simply including a copy of this Agreement, including this Exhibit A is not sufficient to license the Source Code under Secondary Licenses. - - If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. - - You may add additional accurate notices of copyright ownership. - diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/jline-BSD-3-clause b/thirdparty/kafka_2.13-3.1.0/licenses/jline-BSD-3-clause deleted file mode 100644 index 7e11b67..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/jline-BSD-3-clause +++ /dev/null @@ -1,35 +0,0 @@ -Copyright (c) 2002-2018, the original author or authors. -All rights reserved. - -https://opensource.org/licenses/BSD-3-Clause - -Redistribution and use in source and binary forms, with or -without modification, are permitted provided that the following -conditions are met: - -Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - -Redistributions in binary form must reproduce the above copyright -notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with -the distribution. - -Neither the name of JLine nor the names of its contributors -may be used to endorse or promote products derived from this -software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, -BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY -AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO -EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, -OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, -PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED -AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING -IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED -OF THE POSSIBILITY OF SUCH DAMAGE. - diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/jopt-simple-MIT b/thirdparty/kafka_2.13-3.1.0/licenses/jopt-simple-MIT deleted file mode 100644 index 54b2732..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/jopt-simple-MIT +++ /dev/null @@ -1,24 +0,0 @@ -/* - The MIT License - - Copyright (c) 2004-2016 Paul R. Holser, Jr. 
- - Permission is hereby granted, free of charge, to any person obtaining - a copy of this software and associated documentation files (the - "Software"), to deal in the Software without restriction, including - without limitation the rights to use, copy, modify, merge, publish, - distribute, sublicense, and/or sell copies of the Software, and to - permit persons to whom the Software is furnished to do so, subject to - the following conditions: - - The above copyright notice and this permission notice shall be - included in all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE - LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION - OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -*/ diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/paranamer-BSD-3-clause b/thirdparty/kafka_2.13-3.1.0/licenses/paranamer-BSD-3-clause deleted file mode 100644 index 9eab879..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/paranamer-BSD-3-clause +++ /dev/null @@ -1,29 +0,0 @@ -[ ParaNamer used to be 'Pubic Domain', but since it includes a small piece of ASM it is now the same license as that: BSD ] - - Portions copyright (c) 2006-2018 Paul Hammant & ThoughtWorks Inc - Portions copyright (c) 2000-2007 INRIA, France Telecom - All rights reserved. - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions - are met: - 1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - 3. Neither the name of the copyright holders nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE - LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF - THE POSSIBILITY OF SUCH DAMAGE. diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/slf4j-MIT b/thirdparty/kafka_2.13-3.1.0/licenses/slf4j-MIT deleted file mode 100644 index 315bd49..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/slf4j-MIT +++ /dev/null @@ -1,24 +0,0 @@ -Copyright (c) 2004-2017 QOS.ch -All rights reserved. 
- -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - - diff --git a/thirdparty/kafka_2.13-3.1.0/licenses/zstd-jni-BSD-2-clause b/thirdparty/kafka_2.13-3.1.0/licenses/zstd-jni-BSD-2-clause deleted file mode 100644 index 66abb8a..0000000 --- a/thirdparty/kafka_2.13-3.1.0/licenses/zstd-jni-BSD-2-clause +++ /dev/null @@ -1,26 +0,0 @@ -Zstd-jni: JNI bindings to Zstd Library - -Copyright (c) 2015-present, Luben Karavelov/ All rights reserved. - -BSD License - -Redistribution and use in source and binary forms, with or without modification, -are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the documentation and/or - other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/thirdparty/kafka_2.13-3.1.0/logs/controller.log b/thirdparty/kafka_2.13-3.1.0/logs/controller.log deleted file mode 100644 index 0b6239a..0000000 --- a/thirdparty/kafka_2.13-3.1.0/logs/controller.log +++ /dev/null @@ -1,54 +0,0 @@ -[2022-04-26 22:11:29,320] DEBUG preRegister called. Server=com.sun.jmx.mbeanserver.JmxMBeanServer@5bcea91b, name=log4j:logger=kafka.controller (kafka.controller) -[2022-04-26 22:13:46,618] INFO [ControllerEventThread controllerId=0] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) -[2022-04-26 22:13:46,664] INFO [Controller id=0] 0 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) -[2022-04-26 22:13:46,668] INFO [Controller id=0] Creating FeatureZNode at path: /feature with contents: FeatureZNode(Enabled,Features{}) (kafka.controller.KafkaController) -[2022-04-26 22:13:46,713] INFO [Controller id=0] Registering handlers (kafka.controller.KafkaController) -[2022-04-26 22:13:46,717] INFO [Controller id=0] Deleting log dir event notifications (kafka.controller.KafkaController) -[2022-04-26 22:13:46,720] INFO [Controller id=0] Deleting isr change notifications (kafka.controller.KafkaController) -[2022-04-26 22:13:46,723] INFO [Controller id=0] Initializing controller context (kafka.controller.KafkaController) -[2022-04-26 22:13:46,737] INFO [Controller id=0] Initialized broker epochs cache: HashMap(0 -> 25) (kafka.controller.KafkaController) -[2022-04-26 22:13:46,743] DEBUG [Controller id=0] Register BrokerModifications handler for Set(0) (kafka.controller.KafkaController) -[2022-04-26 22:13:46,748] DEBUG [Channel manager on controller 0]: Controller 0 trying to connect to broker 0 (kafka.controller.ControllerChannelManager) -[2022-04-26 22:13:46,757] INFO [RequestSendThread controllerId=0] Starting (kafka.controller.RequestSendThread) -[2022-04-26 22:13:46,760] INFO [Controller id=0] Currently active brokers in the cluster: Set(0) (kafka.controller.KafkaController) -[2022-04-26 22:13:46,760] INFO [Controller id=0] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) -[2022-04-26 22:13:46,760] INFO [Controller id=0] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) -[2022-04-26 22:13:46,761] INFO [Controller id=0] Fetching topic deletions in progress (kafka.controller.KafkaController) -[2022-04-26 22:13:46,764] INFO [Controller id=0] List of topics to be deleted: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,764] INFO [Controller id=0] List of topics ineligible for deletion: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,765] INFO [Controller id=0] Initializing topic deletion manager (kafka.controller.KafkaController) -[2022-04-26 22:13:46,765] INFO [Topic Deletion Manager 0] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) -[2022-04-26 22:13:46,766] INFO [Controller id=0] Sending update metadata request (kafka.controller.KafkaController) -[2022-04-26 22:13:46,778] INFO [ReplicaStateMachine controllerId=0] Initializing replica state (kafka.controller.ZkReplicaStateMachine) -[2022-04-26 22:13:46,779] INFO [ReplicaStateMachine controllerId=0] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) -[2022-04-26 22:13:46,784] INFO [ReplicaStateMachine controllerId=0] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) -[2022-04-26 22:13:46,784] DEBUG [ReplicaStateMachine controllerId=0] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) -[2022-04-26 22:13:46,785] INFO [PartitionStateMachine controllerId=0] Initializing partition state (kafka.controller.ZkPartitionStateMachine) -[2022-04-26 22:13:46,786] INFO [PartitionStateMachine controllerId=0] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) -[2022-04-26 22:13:46,787] INFO [RequestSendThread controllerId=0] Controller 0 connected to localhost:9092 (id: 0 rack: null) for sending state change requests 
(kafka.controller.RequestSendThread) -[2022-04-26 22:13:46,789] DEBUG [PartitionStateMachine controllerId=0] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) -[2022-04-26 22:13:46,789] INFO [Controller id=0] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) -[2022-04-26 22:13:46,801] INFO [Controller id=0] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,801] INFO [Controller id=0] Partitions that completed preferred replica election: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,802] INFO [Controller id=0] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,802] INFO [Controller id=0] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) -[2022-04-26 22:13:46,804] INFO [Controller id=0] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) -[2022-04-26 22:13:46,826] INFO [Controller id=0] Starting the controller scheduler (kafka.controller.KafkaController) -[2022-04-26 22:13:51,829] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) -[2022-04-26 22:13:51,830] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) -[2022-04-26 22:17:10,876] INFO [Controller id=0] New topics: [Set(roy-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(roy-topic,Some(S_-AeMmpTxKKSVvRifuAWA),Map(roy-topic-0 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) -[2022-04-26 22:17:10,877] INFO [Controller id=0] New partition creation callback for roy-topic-0 (kafka.controller.KafkaController) -[2022-04-26 22:18:51,841] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) -[2022-04-26 22:18:51,841] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) -[2022-04-26 22:18:51,843] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController) -[2022-04-26 22:18:51,844] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) -[2022-04-26 22:23:26,454] INFO [Controller id=0] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(oijCkFadRXmdKsrjPPD9Yg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), 
__consumer_offsets-4 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=0, 
addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=0, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) -[2022-04-26 22:23:26,454] INFO [Controller id=0] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) -[2022-04-26 22:23:51,791] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) -[2022-04-26 22:23:51,797] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) -[2022-04-26 22:23:51,800] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) -[2022-04-26 22:23:51,800] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) -[2022-04-26 22:28:51,803] INFO [Controller id=0] Processing automatic preferred replica leader election (kafka.controller.KafkaController) -[2022-04-26 22:28:51,804] TRACE [Controller id=0] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) -[2022-04-26 22:28:51,805] DEBUG [Controller id=0] Topics not in preferred replica for broker 0 HashMap() (kafka.controller.KafkaController) -[2022-04-26 22:28:51,805] TRACE [Controller id=0] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) diff --git a/thirdparty/kafka_2.13-3.1.0/logs/kafka-authorizer.log b/thirdparty/kafka_2.13-3.1.0/logs/kafka-authorizer.log deleted file mode 100644 index e69de29..0000000 diff --git a/thirdparty/kafka_2.13-3.1.0/logs/kafka-request.log b/thirdparty/kafka_2.13-3.1.0/logs/kafka-request.log deleted file mode 100644 index e69de29..0000000 diff --git a/thirdparty/kafka_2.13-3.1.0/logs/kafkaServer-gc.log b/thirdparty/kafka_2.13-3.1.0/logs/kafkaServer-gc.log deleted file mode 100644 index 7df8629..0000000 --- a/thirdparty/kafka_2.13-3.1.0/logs/kafkaServer-gc.log +++ /dev/null @@ -1,129 +0,0 @@ -[2022-04-26T22:13:43.363+0900][gc,heap] Heap region size: 1M -[2022-04-26T22:13:43.372+0900][gc ] Using G1 -[2022-04-26T22:13:43.372+0900][gc,heap,coops] Heap address: 0x00000007c0000000, size: 1024 MB, Compressed Oops mode: Zero based, Oop shift amount: 3 -[2022-04-26T22:13:44.593+0900][gc,start ] GC(0) Pause Young (Concurrent Start) (Metadata GC Threshold) -[2022-04-26T22:13:44.593+0900][gc,task ] 
GC(0) Using 8 workers of 8 for evacuation -[2022-04-26T22:13:44.601+0900][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.0ms -[2022-04-26T22:13:44.601+0900][gc,phases ] GC(0) Evacuate Collection Set: 7.4ms -[2022-04-26T22:13:44.601+0900][gc,phases ] GC(0) Post Evacuate Collection Set: 0.4ms -[2022-04-26T22:13:44.601+0900][gc,phases ] GC(0) Other: 0.8ms -[2022-04-26T22:13:44.601+0900][gc,heap ] GC(0) Eden regions: 44->0(44) -[2022-04-26T22:13:44.601+0900][gc,heap ] GC(0) Survivor regions: 0->7(7) -[2022-04-26T22:13:44.601+0900][gc,heap ] GC(0) Old regions: 0->0 -[2022-04-26T22:13:44.601+0900][gc,heap ] GC(0) Humongous regions: 0->0 -[2022-04-26T22:13:44.601+0900][gc,metaspace ] GC(0) Metaspace: 20756K->20756K(1069056K) -[2022-04-26T22:13:44.601+0900][gc ] GC(0) Pause Young (Concurrent Start) (Metadata GC Threshold) 44M->6M(1024M) 8.677ms -[2022-04-26T22:13:44.601+0900][gc,cpu ] GC(0) User=0.02s Sys=0.00s Real=0.01s -[2022-04-26T22:13:44.602+0900][gc ] GC(1) Concurrent Cycle -[2022-04-26T22:13:44.602+0900][gc,marking ] GC(1) Concurrent Clear Claimed Marks -[2022-04-26T22:13:44.602+0900][gc,marking ] GC(1) Concurrent Clear Claimed Marks 0.021ms -[2022-04-26T22:13:44.602+0900][gc,marking ] GC(1) Concurrent Scan Root Regions -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Scan Root Regions 2.077ms -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Mark (1.250s) -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Mark From Roots -[2022-04-26T22:13:44.604+0900][gc,task ] GC(1) Using 2 workers of 2 for marking -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Mark From Roots 0.183ms -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Preclean -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Preclean 0.028ms -[2022-04-26T22:13:44.604+0900][gc,marking ] GC(1) Concurrent Mark (1.250s, 1.250s) 0.255ms -[2022-04-26T22:13:44.604+0900][gc,start ] GC(1) Pause Remark -[2022-04-26T22:13:44.606+0900][gc,stringtable] GC(1) Cleaned string and symbol table, strings: 7217 processed, 0 removed, symbols: 63664 processed, 32 removed -[2022-04-26T22:13:44.607+0900][gc ] GC(1) Pause Remark 7M->7M(1024M) 2.440ms -[2022-04-26T22:13:44.607+0900][gc,cpu ] GC(1) User=0.01s Sys=0.00s Real=0.00s -[2022-04-26T22:13:44.607+0900][gc,marking ] GC(1) Concurrent Rebuild Remembered Sets -[2022-04-26T22:13:44.607+0900][gc,marking ] GC(1) Concurrent Rebuild Remembered Sets 0.088ms -[2022-04-26T22:13:44.607+0900][gc,start ] GC(1) Pause Cleanup -[2022-04-26T22:13:44.607+0900][gc ] GC(1) Pause Cleanup 7M->7M(1024M) 0.202ms -[2022-04-26T22:13:44.607+0900][gc,cpu ] GC(1) User=0.00s Sys=0.00s Real=0.00s -[2022-04-26T22:13:44.607+0900][gc,marking ] GC(1) Concurrent Cleanup for Next Mark -[2022-04-26T22:13:44.613+0900][gc,marking ] GC(1) Concurrent Cleanup for Next Mark 6.233ms -[2022-04-26T22:13:44.614+0900][gc ] GC(1) Concurrent Cycle 12.050ms -[2022-04-26T22:13:45.801+0900][gc,start ] GC(2) Pause Young (Normal) (G1 Evacuation Pause) -[2022-04-26T22:13:45.801+0900][gc,task ] GC(2) Using 8 workers of 8 for evacuation -[2022-04-26T22:13:45.809+0900][gc,phases ] GC(2) Pre Evacuate Collection Set: 0.0ms -[2022-04-26T22:13:45.809+0900][gc,phases ] GC(2) Evacuate Collection Set: 6.8ms -[2022-04-26T22:13:45.809+0900][gc,phases ] GC(2) Post Evacuate Collection Set: 0.6ms -[2022-04-26T22:13:45.809+0900][gc,phases ] GC(2) Other: 0.1ms -[2022-04-26T22:13:45.809+0900][gc,heap ] GC(2) Eden regions: 44->0(48) -[2022-04-26T22:13:45.809+0900][gc,heap ] GC(2) Survivor regions: 7->3(7) 
-[2022-04-26T22:13:45.809+0900][gc,heap ] GC(2) Old regions: 0->7 -[2022-04-26T22:13:45.809+0900][gc,heap ] GC(2) Humongous regions: 129->129 -[2022-04-26T22:13:45.809+0900][gc,metaspace ] GC(2) Metaspace: 30183K->30183K(1077248K) -[2022-04-26T22:13:45.809+0900][gc ] GC(2) Pause Young (Normal) (G1 Evacuation Pause) 179M->138M(1024M) 7.740ms -[2022-04-26T22:13:45.809+0900][gc,cpu ] GC(2) User=0.04s Sys=0.01s Real=0.00s -[2022-04-26T22:13:46.211+0900][gc,start ] GC(3) Pause Young (Concurrent Start) (Metadata GC Threshold) -[2022-04-26T22:13:46.211+0900][gc,task ] GC(3) Using 8 workers of 8 for evacuation -[2022-04-26T22:13:46.214+0900][gc,phases ] GC(3) Pre Evacuate Collection Set: 0.0ms -[2022-04-26T22:13:46.214+0900][gc,phases ] GC(3) Evacuate Collection Set: 3.0ms -[2022-04-26T22:13:46.214+0900][gc,phases ] GC(3) Post Evacuate Collection Set: 0.3ms -[2022-04-26T22:13:46.214+0900][gc,phases ] GC(3) Other: 0.2ms -[2022-04-26T22:13:46.214+0900][gc,heap ] GC(3) Eden regions: 25->0(48) -[2022-04-26T22:13:46.214+0900][gc,heap ] GC(3) Survivor regions: 3->3(7) -[2022-04-26T22:13:46.214+0900][gc,heap ] GC(3) Old regions: 7->7 -[2022-04-26T22:13:46.214+0900][gc,heap ] GC(3) Humongous regions: 129->129 -[2022-04-26T22:13:46.214+0900][gc,metaspace ] GC(3) Metaspace: 35153K->35153K(1081344K) -[2022-04-26T22:13:46.214+0900][gc ] GC(3) Pause Young (Concurrent Start) (Metadata GC Threshold) 162M->138M(1024M) 3.584ms -[2022-04-26T22:13:46.214+0900][gc,cpu ] GC(3) User=0.01s Sys=0.00s Real=0.00s -[2022-04-26T22:13:46.214+0900][gc ] GC(4) Concurrent Cycle -[2022-04-26T22:13:46.214+0900][gc,marking ] GC(4) Concurrent Clear Claimed Marks -[2022-04-26T22:13:46.214+0900][gc,marking ] GC(4) Concurrent Clear Claimed Marks 0.034ms -[2022-04-26T22:13:46.215+0900][gc,marking ] GC(4) Concurrent Scan Root Regions -[2022-04-26T22:13:46.216+0900][gc,marking ] GC(4) Concurrent Scan Root Regions 1.399ms -[2022-04-26T22:13:46.216+0900][gc,marking ] GC(4) Concurrent Mark (2.862s) -[2022-04-26T22:13:46.216+0900][gc,marking ] GC(4) Concurrent Mark From Roots -[2022-04-26T22:13:46.216+0900][gc,task ] GC(4) Using 2 workers of 2 for marking -[2022-04-26T22:13:46.220+0900][gc,marking ] GC(4) Concurrent Mark From Roots 4.500ms -[2022-04-26T22:13:46.221+0900][gc,marking ] GC(4) Concurrent Preclean -[2022-04-26T22:13:46.221+0900][gc,marking ] GC(4) Concurrent Preclean 0.093ms -[2022-04-26T22:13:46.221+0900][gc,marking ] GC(4) Concurrent Mark (2.862s, 2.866s) 4.679ms -[2022-04-26T22:13:46.221+0900][gc,start ] GC(4) Pause Remark -[2022-04-26T22:13:46.223+0900][gc,stringtable] GC(4) Cleaned string and symbol table, strings: 11115 processed, 7 removed, symbols: 96957 processed, 26 removed -[2022-04-26T22:13:46.224+0900][gc ] GC(4) Pause Remark 139M->139M(1024M) 2.959ms -[2022-04-26T22:13:46.224+0900][gc,cpu ] GC(4) User=0.01s Sys=0.00s Real=0.00s -[2022-04-26T22:13:46.224+0900][gc,marking ] GC(4) Concurrent Rebuild Remembered Sets -[2022-04-26T22:13:46.227+0900][gc,marking ] GC(4) Concurrent Rebuild Remembered Sets 3.321ms -[2022-04-26T22:13:46.227+0900][gc,start ] GC(4) Pause Cleanup -[2022-04-26T22:13:46.227+0900][gc ] GC(4) Pause Cleanup 139M->139M(1024M) 0.178ms -[2022-04-26T22:13:46.228+0900][gc,cpu ] GC(4) User=0.00s Sys=0.00s Real=0.00s -[2022-04-26T22:13:46.228+0900][gc,marking ] GC(4) Concurrent Cleanup for Next Mark -[2022-04-26T22:13:46.234+0900][gc,marking ] GC(4) Concurrent Cleanup for Next Mark 6.322ms -[2022-04-26T22:13:46.234+0900][gc ] GC(4) Concurrent Cycle 19.521ms -[2022-04-26T22:14:15.614+0900][gc,start ] GC(5) 
Pause Young (Normal) (G1 Evacuation Pause) -[2022-04-26T22:14:15.614+0900][gc,task ] GC(5) Using 8 workers of 8 for evacuation -[2022-04-26T22:14:15.620+0900][gc,phases ] GC(5) Pre Evacuate Collection Set: 0.1ms -[2022-04-26T22:14:15.620+0900][gc,phases ] GC(5) Evacuate Collection Set: 5.0ms -[2022-04-26T22:14:15.620+0900][gc,phases ] GC(5) Post Evacuate Collection Set: 0.5ms -[2022-04-26T22:14:15.620+0900][gc,phases ] GC(5) Other: 0.3ms -[2022-04-26T22:14:15.620+0900][gc,heap ] GC(5) Eden regions: 48->0(44) -[2022-04-26T22:14:15.620+0900][gc,heap ] GC(5) Survivor regions: 3->7(7) -[2022-04-26T22:14:15.620+0900][gc,heap ] GC(5) Old regions: 7->7 -[2022-04-26T22:14:15.620+0900][gc,heap ] GC(5) Humongous regions: 129->129 -[2022-04-26T22:14:15.620+0900][gc,metaspace ] GC(5) Metaspace: 43644K->43644K(1089536K) -[2022-04-26T22:14:15.620+0900][gc ] GC(5) Pause Young (Normal) (G1 Evacuation Pause) 186M->142M(1024M) 5.961ms -[2022-04-26T22:14:15.620+0900][gc,cpu ] GC(5) User=0.04s Sys=0.01s Real=0.01s -[2022-04-26T22:23:28.271+0900][gc,start ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) -[2022-04-26T22:23:28.271+0900][gc,task ] GC(6) Using 8 workers of 8 for evacuation -[2022-04-26T22:23:28.279+0900][gc,phases ] GC(6) Pre Evacuate Collection Set: 0.0ms -[2022-04-26T22:23:28.279+0900][gc,phases ] GC(6) Evacuate Collection Set: 6.9ms -[2022-04-26T22:23:28.279+0900][gc,phases ] GC(6) Post Evacuate Collection Set: 0.8ms -[2022-04-26T22:23:28.279+0900][gc,phases ] GC(6) Other: 0.2ms -[2022-04-26T22:23:28.279+0900][gc,heap ] GC(6) Eden regions: 44->0(48) -[2022-04-26T22:23:28.279+0900][gc,heap ] GC(6) Survivor regions: 7->3(7) -[2022-04-26T22:23:28.279+0900][gc,heap ] GC(6) Old regions: 7->14 -[2022-04-26T22:23:28.279+0900][gc,heap ] GC(6) Humongous regions: 129->129 -[2022-04-26T22:23:28.279+0900][gc,metaspace ] GC(6) Metaspace: 49030K->49030K(1093632K) -[2022-04-26T22:23:28.279+0900][gc ] GC(6) Pause Young (Normal) (G1 Evacuation Pause) 186M->144M(1024M) 8.130ms -[2022-04-26T22:23:28.279+0900][gc,cpu ] GC(6) User=0.05s Sys=0.01s Real=0.00s -[2022-04-26T22:30:54.551+0900][gc,start ] GC(7) Pause Young (Normal) (G1 Evacuation Pause) -[2022-04-26T22:30:54.552+0900][gc,task ] GC(7) Using 8 workers of 8 for evacuation -[2022-04-26T22:30:54.558+0900][gc,phases ] GC(7) Pre Evacuate Collection Set: 0.0ms -[2022-04-26T22:30:54.558+0900][gc,phases ] GC(7) Evacuate Collection Set: 5.5ms -[2022-04-26T22:30:54.558+0900][gc,phases ] GC(7) Post Evacuate Collection Set: 0.7ms -[2022-04-26T22:30:54.558+0900][gc,phases ] GC(7) Other: 0.4ms -[2022-04-26T22:30:54.558+0900][gc,heap ] GC(7) Eden regions: 48->0(45) -[2022-04-26T22:30:54.558+0900][gc,heap ] GC(7) Survivor regions: 3->6(7) -[2022-04-26T22:30:54.558+0900][gc,heap ] GC(7) Old regions: 14->14 -[2022-04-26T22:30:54.558+0900][gc,heap ] GC(7) Humongous regions: 129->129 -[2022-04-26T22:30:54.558+0900][gc,metaspace ] GC(7) Metaspace: 51505K->51505K(1095680K) -[2022-04-26T22:30:54.558+0900][gc ] GC(7) Pause Young (Normal) (G1 Evacuation Pause) 192M->147M(1024M) 6.739ms -[2022-04-26T22:30:54.558+0900][gc,cpu ] GC(7) User=0.04s Sys=0.00s Real=0.00s diff --git a/thirdparty/kafka_2.13-3.1.0/logs/log-cleaner.log b/thirdparty/kafka_2.13-3.1.0/logs/log-cleaner.log deleted file mode 100644 index 35a4227..0000000 --- a/thirdparty/kafka_2.13-3.1.0/logs/log-cleaner.log +++ /dev/null @@ -1,2 +0,0 @@ -[2022-04-26 22:13:45,623] INFO Starting the log cleaner (kafka.log.LogCleaner) -[2022-04-26 22:13:45,721] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner) 
diff --git a/thirdparty/kafka_2.13-3.1.0/logs/server.log b/thirdparty/kafka_2.13-3.1.0/logs/server.log deleted file mode 100644 index 0e40336..0000000 --- a/thirdparty/kafka_2.13-3.1.0/logs/server.log +++ /dev/null @@ -1,1083 +0,0 @@ -[2022-04-26 22:11:29,285] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,300] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,300] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,300] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,301] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,304] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) -[2022-04-26 22:11:29,304] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) -[2022-04-26 22:11:29,305] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) -[2022-04-26 22:11:29,305] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) -[2022-04-26 22:11:29,310] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil) -[2022-04-26 22:11:29,324] INFO Reading configuration from: ./config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,324] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,324] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,324] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,324] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) -[2022-04-26 22:11:29,324] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) -[2022-04-26 22:11:29,341] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@4d910fd6 (org.apache.zookeeper.server.ServerMetrics) -[2022-04-26 22:11:29,347] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) -[2022-04-26 22:11:29,365] INFO (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,365] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,365] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,365] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,365] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,365] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,366] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,366] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,366] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 
22:11:29,366] INFO (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,369] INFO Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,369] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,369] INFO Server environment:java.version=11.0.11 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,369] INFO Server environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,369] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,370] INFO Server environment:java.class.path=/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/activation-1.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/argparse4j-0.7.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/commons-cli-1.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-basic-auth-extension-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-file-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-json-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-mirror-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-mirror-client-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-runtime-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-transforms-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-api-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-annotations-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-core-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-databind-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-dataformat-csv-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-datatype-jdk8-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-jaxrs-base-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-jaxrs-json-provider-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kaf
ka_2.13-3.1.0/bin/../libs/jackson-module-jaxb-annotations-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-module-scala_2.13-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-client-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-common-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-container-servlet-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-container-servlet-core-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-hk2-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-server-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-client-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-continuation-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-http-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-io-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-security-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-server-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-servlet-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-servlets-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-util-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-util-ajax-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jline-3.12.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jose4j-0.7.8.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-clients-3.1.0.jar:/Users/r
oy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-log4j-appender-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-metadata-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-raft-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-server-common-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-shell-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-storage-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-storage-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-examples-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-scala_2.13-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-test-utils-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-tools-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka_2.13-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/log4j-1.2.17.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/lz4-java-1.8.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/maven-artifact-3.8.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/metrics-core-2.2.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/metrics-core-4.1.12.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-buffer-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-codec-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-common-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-handler-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-resolver-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-native-epoll-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-native-unix-common-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/paranamer-2.8.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/reflections-0.9.12.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/rocksdbjni-6.22.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-collection-compat_2.13-2.4.4.jar:/Use
rs/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-java8-compat_2.13-1.0.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-library-2.13.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-logging_2.13-3.9.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-reflect-2.13.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/snappy-java-1.1.8.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/trogdor-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zookeeper-3.6.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zookeeper-jute-3.6.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zstd-jni-1.5.0-4.jar (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,370] INFO Server environment:java.library.path=/Users/roy/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,370] INFO Server environment:java.io.tmpdir=/var/folders/yd/nz_t0wjs2c1gv9lc2x0d9m6w0000gn/T/ (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.name=Mac OS X (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.arch=x86_64 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.version=11.1 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:user.name=roy (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:user.home=/Users/roy (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:user.dir=/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO zookeeper.flushDelay=0 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,371] INFO zookeeper.maxWriteQueuePollTime=0 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,372] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,372] INFO 
zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,373] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) -[2022-04-26 22:11:29,375] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,375] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,377] INFO Response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) -[2022-04-26 22:11:29,377] INFO Response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) -[2022-04-26 22:11:29,379] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,379] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,379] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,379] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,380] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,380] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) -[2022-04-26 22:11:29,384] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,384] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,384] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,399] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) -[2022-04-26 22:11:29,401] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) -[2022-04-26 22:11:29,403] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) -[2022-04-26 22:11:29,412] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) -[2022-04-26 22:11:29,445] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) -[2022-04-26 22:11:29,445] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) -[2022-04-26 22:11:29,446] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) -[2022-04-26 22:11:29,447] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) -[2022-04-26 22:11:29,455] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) -[2022-04-26 22:11:29,455] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) -[2022-04-26 22:11:29,462] INFO Snapshot loaded in 15 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) -[2022-04-26 22:11:29,462] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) -[2022-04-26 22:11:29,463] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) -[2022-04-26 22:11:29,476] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) -[2022-04-26 22:11:29,477] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler) -[2022-04-26 22:11:29,502] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) -[2022-04-26 22:11:29,515] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) -[2022-04-26 22:13:44,075] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) -[2022-04-26 22:13:44,556] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) -[2022-04-26 22:13:44,695] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) -[2022-04-26 22:13:44,699] INFO starting (kafka.server.KafkaServer) -[2022-04-26 22:13:44,699] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer) -[2022-04-26 22:13:44,722] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. 
(kafka.zookeeper.ZooKeeperClient) -[2022-04-26 22:13:44,730] INFO Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,730] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,730] INFO Client environment:java.version=11.0.11 (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,731] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,731] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,731] INFO Client environment:java.class.path=/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/activation-1.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/argparse4j-0.7.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/commons-cli-1.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-basic-auth-extension-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-file-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-json-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-mirror-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-mirror-client-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-runtime-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/connect-transforms-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-api-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-annotations-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-core-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-databind-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-dataformat-csv-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-datatype-jdk8-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-jaxrs-base-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-jaxrs-json-provider-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-module-jaxb-annotations-2.12.3.jar:/Users/roy/Desktop/my-proje
ct/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jackson-module-scala_2.13-2.12.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-client-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-common-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-container-servlet-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-container-servlet-core-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-hk2-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jersey-server-2.34.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-client-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-continuation-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-http-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-io-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-security-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-server-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-servlet-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-servlets-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-util-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jetty-util-ajax-9.4.43.v20210629.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jline-3.12.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/jose4j-0.7.8.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-clients-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-log4j-appender-
3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-metadata-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-raft-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-server-common-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-shell-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-storage-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-storage-api-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-examples-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-scala_2.13-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-streams-test-utils-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka-tools-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/kafka_2.13-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/log4j-1.2.17.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/lz4-java-1.8.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/maven-artifact-3.8.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/metrics-core-2.2.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/metrics-core-4.1.12.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-buffer-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-codec-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-common-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-handler-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-resolver-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-native-epoll-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/netty-transport-native-unix-common-4.1.68.Final.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/paranamer-2.8.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/reflections-0.9.12.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/rocksdbjni-6.22.1.1.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-collection-compat_2.13-2.4.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-java8-compa
t_2.13-1.0.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-library-2.13.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-logging_2.13-3.9.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/scala-reflect-2.13.6.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/snappy-java-1.1.8.4.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/trogdor-3.1.0.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zookeeper-3.6.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zookeeper-jute-3.6.3.jar:/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0/bin/../libs/zstd-jni-1.5.0-4.jar (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:java.library.path=/Users/roy/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:java.io.tmpdir=/var/folders/yd/nz_t0wjs2c1gv9lc2x0d9m6w0000gn/T/ (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:os.version=11.1 (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:user.name=roy (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:user.home=/Users/roy (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,732] INFO Client environment:user.dir=/Users/roy/Desktop/my-project/spring-cloud/thirdparty/kafka_2.13-3.1.0 (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,733] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,733] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,733] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,737] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@54dcfa5a (org.apache.zookeeper.ZooKeeper) -[2022-04-26 22:13:44,747] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) -[2022-04-26 22:13:44,758] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) -[2022-04-26 22:13:44,761] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) -[2022-04-26 22:13:44,767] INFO Opening socket connection to server localhost/127.0.0.1:2181. 
(org.apache.zookeeper.ClientCnxn) -[2022-04-26 22:13:44,768] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) -[2022-04-26 22:13:44,779] INFO Socket connection established, initiating session, client: /127.0.0.1:64687, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn) -[2022-04-26 22:13:44,792] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) -[2022-04-26 22:13:44,820] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x1007a1b2f320000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) -[2022-04-26 22:13:44,825] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) -[2022-04-26 22:13:45,145] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) -[2022-04-26 22:13:45,160] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) -[2022-04-26 22:13:45,160] INFO Cleared cache (kafka.server.FinalizedFeatureCache) -[2022-04-26 22:13:45,421] INFO Cluster ID = xjOUDJoLRluCflhzMj7GCw (kafka.server.KafkaServer) -[2022-04-26 22:13:45,429] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint) -[2022-04-26 22:13:45,474] INFO KafkaConfig values: - advertised.listeners = null - alter.config.policy.class.name = null - alter.log.dirs.replication.quota.window.num = 11 - alter.log.dirs.replication.quota.window.size.seconds = 1 - authorizer.class.name = - auto.create.topics.enable = true - auto.leader.rebalance.enable = true - background.threads = 10 - broker.heartbeat.interval.ms = 2000 - broker.id = 0 - broker.id.generation.enable = true - broker.rack = null - broker.session.timeout.ms = 9000 - client.quota.callback.class = null - compression.type = producer - connection.failed.authentication.delay.ms = 100 - connections.max.idle.ms = 600000 - connections.max.reauth.ms = 0 - control.plane.listener.name = null - controlled.shutdown.enable = true - controlled.shutdown.max.retries = 3 - controlled.shutdown.retry.backoff.ms = 5000 - controller.listener.names = null - controller.quorum.append.linger.ms = 25 - controller.quorum.election.backoff.max.ms = 1000 - controller.quorum.election.timeout.ms = 1000 - controller.quorum.fetch.timeout.ms = 2000 - controller.quorum.request.timeout.ms = 2000 - controller.quorum.retry.backoff.ms = 20 - controller.quorum.voters = [] - controller.quota.window.num = 11 - controller.quota.window.size.seconds = 1 - controller.socket.timeout.ms = 30000 - create.topic.policy.class.name = null - default.replication.factor = 1 - delegation.token.expiry.check.interval.ms = 3600000 - delegation.token.expiry.time.ms = 86400000 - delegation.token.master.key = null - delegation.token.max.lifetime.ms = 604800000 - delegation.token.secret.key = null - delete.records.purgatory.purge.interval.requests = 1 - delete.topic.enable = true - fetch.max.bytes = 57671680 - fetch.purgatory.purge.interval.requests = 1000 - group.initial.rebalance.delay.ms = 0 - group.max.session.timeout.ms = 1800000 - group.max.size = 2147483647 - group.min.session.timeout.ms = 6000 - initial.broker.registration.timeout.ms = 60000 - inter.broker.listener.name = null - inter.broker.protocol.version = 3.1-IV0 - kafka.metrics.polling.interval.secs = 10 - kafka.metrics.reporters = [] - leader.imbalance.check.interval.seconds = 300 - 
leader.imbalance.per.broker.percentage = 10 - listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL - listeners = PLAINTEXT://:9092 - log.cleaner.backoff.ms = 15000 - log.cleaner.dedupe.buffer.size = 134217728 - log.cleaner.delete.retention.ms = 86400000 - log.cleaner.enable = true - log.cleaner.io.buffer.load.factor = 0.9 - log.cleaner.io.buffer.size = 524288 - log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 - log.cleaner.max.compaction.lag.ms = 9223372036854775807 - log.cleaner.min.cleanable.ratio = 0.5 - log.cleaner.min.compaction.lag.ms = 0 - log.cleaner.threads = 1 - log.cleanup.policy = [delete] - log.dir = /tmp/kafka-logs - log.dirs = /tmp/kafka-logs - log.flush.interval.messages = 9223372036854775807 - log.flush.interval.ms = null - log.flush.offset.checkpoint.interval.ms = 60000 - log.flush.scheduler.interval.ms = 9223372036854775807 - log.flush.start.offset.checkpoint.interval.ms = 60000 - log.index.interval.bytes = 4096 - log.index.size.max.bytes = 10485760 - log.message.downconversion.enable = true - log.message.format.version = 3.0-IV1 - log.message.timestamp.difference.max.ms = 9223372036854775807 - log.message.timestamp.type = CreateTime - log.preallocate = false - log.retention.bytes = -1 - log.retention.check.interval.ms = 300000 - log.retention.hours = 168 - log.retention.minutes = null - log.retention.ms = null - log.roll.hours = 168 - log.roll.jitter.hours = 0 - log.roll.jitter.ms = null - log.roll.ms = null - log.segment.bytes = 1073741824 - log.segment.delete.delay.ms = 60000 - max.connection.creation.rate = 2147483647 - max.connections = 2147483647 - max.connections.per.ip = 2147483647 - max.connections.per.ip.overrides = - max.incremental.fetch.session.cache.slots = 1000 - message.max.bytes = 1048588 - metadata.log.dir = null - metadata.log.max.record.bytes.between.snapshots = 20971520 - metadata.log.segment.bytes = 1073741824 - metadata.log.segment.min.bytes = 8388608 - metadata.log.segment.ms = 604800000 - metadata.max.retention.bytes = -1 - metadata.max.retention.ms = 604800000 - metric.reporters = [] - metrics.num.samples = 2 - metrics.recording.level = INFO - metrics.sample.window.ms = 30000 - min.insync.replicas = 1 - node.id = 0 - num.io.threads = 8 - num.network.threads = 3 - num.partitions = 1 - num.recovery.threads.per.data.dir = 1 - num.replica.alter.log.dirs.threads = null - num.replica.fetchers = 1 - offset.metadata.max.bytes = 4096 - offsets.commit.required.acks = -1 - offsets.commit.timeout.ms = 5000 - offsets.load.buffer.size = 5242880 - offsets.retention.check.interval.ms = 600000 - offsets.retention.minutes = 10080 - offsets.topic.compression.codec = 0 - offsets.topic.num.partitions = 50 - offsets.topic.replication.factor = 1 - offsets.topic.segment.bytes = 104857600 - password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding - password.encoder.iterations = 4096 - password.encoder.key.length = 128 - password.encoder.keyfactory.algorithm = null - password.encoder.old.secret = null - password.encoder.secret = null - principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder - process.roles = [] - producer.purgatory.purge.interval.requests = 1000 - queued.max.request.bytes = -1 - queued.max.requests = 500 - quota.window.num = 11 - quota.window.size.seconds = 1 - remote.log.index.file.cache.total.size.bytes = 1073741824 - remote.log.manager.task.interval.ms = 30000 - remote.log.manager.task.retry.backoff.max.ms = 30000 - 
remote.log.manager.task.retry.backoff.ms = 500 - remote.log.manager.task.retry.jitter = 0.2 - remote.log.manager.thread.pool.size = 10 - remote.log.metadata.manager.class.name = null - remote.log.metadata.manager.class.path = null - remote.log.metadata.manager.impl.prefix = null - remote.log.metadata.manager.listener.name = null - remote.log.reader.max.pending.tasks = 100 - remote.log.reader.threads = 10 - remote.log.storage.manager.class.name = null - remote.log.storage.manager.class.path = null - remote.log.storage.manager.impl.prefix = null - remote.log.storage.system.enable = false - replica.fetch.backoff.ms = 1000 - replica.fetch.max.bytes = 1048576 - replica.fetch.min.bytes = 1 - replica.fetch.response.max.bytes = 10485760 - replica.fetch.wait.max.ms = 500 - replica.high.watermark.checkpoint.interval.ms = 5000 - replica.lag.time.max.ms = 30000 - replica.selector.class = null - replica.socket.receive.buffer.bytes = 65536 - replica.socket.timeout.ms = 30000 - replication.quota.window.num = 11 - replication.quota.window.size.seconds = 1 - request.timeout.ms = 30000 - reserved.broker.max.id = 1000 - sasl.client.callback.handler.class = null - sasl.enabled.mechanisms = [GSSAPI] - sasl.jaas.config = null - sasl.kerberos.kinit.cmd = /usr/bin/kinit - sasl.kerberos.min.time.before.relogin = 60000 - sasl.kerberos.principal.to.local.rules = [DEFAULT] - sasl.kerberos.service.name = null - sasl.kerberos.ticket.renew.jitter = 0.05 - sasl.kerberos.ticket.renew.window.factor = 0.8 - sasl.login.callback.handler.class = null - sasl.login.class = null - sasl.login.connect.timeout.ms = null - sasl.login.read.timeout.ms = null - sasl.login.refresh.buffer.seconds = 300 - sasl.login.refresh.min.period.seconds = 60 - sasl.login.refresh.window.factor = 0.8 - sasl.login.refresh.window.jitter = 0.05 - sasl.login.retry.backoff.max.ms = 10000 - sasl.login.retry.backoff.ms = 100 - sasl.mechanism.controller.protocol = GSSAPI - sasl.mechanism.inter.broker.protocol = GSSAPI - sasl.oauthbearer.clock.skew.seconds = 30 - sasl.oauthbearer.expected.audience = null - sasl.oauthbearer.expected.issuer = null - sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 - sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 - sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 - sasl.oauthbearer.jwks.endpoint.url = null - sasl.oauthbearer.scope.claim.name = scope - sasl.oauthbearer.sub.claim.name = sub - sasl.oauthbearer.token.endpoint.url = null - sasl.server.callback.handler.class = null - security.inter.broker.protocol = PLAINTEXT - security.providers = null - socket.connection.setup.timeout.max.ms = 30000 - socket.connection.setup.timeout.ms = 10000 - socket.receive.buffer.bytes = 102400 - socket.request.max.bytes = 104857600 - socket.send.buffer.bytes = 102400 - ssl.cipher.suites = [] - ssl.client.auth = none - ssl.enabled.protocols = [TLSv1.2, TLSv1.3] - ssl.endpoint.identification.algorithm = https - ssl.engine.factory.class = null - ssl.key.password = null - ssl.keymanager.algorithm = SunX509 - ssl.keystore.certificate.chain = null - ssl.keystore.key = null - ssl.keystore.location = null - ssl.keystore.password = null - ssl.keystore.type = JKS - ssl.principal.mapping.rules = DEFAULT - ssl.protocol = TLSv1.3 - ssl.provider = null - ssl.secure.random.implementation = null - ssl.trustmanager.algorithm = PKIX - ssl.truststore.certificates = null - ssl.truststore.location = null - ssl.truststore.password = null - ssl.truststore.type = JKS - transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 - 
transaction.max.timeout.ms = 900000 - transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 - transaction.state.log.load.buffer.size = 5242880 - transaction.state.log.min.isr = 1 - transaction.state.log.num.partitions = 50 - transaction.state.log.replication.factor = 1 - transaction.state.log.segment.bytes = 104857600 - transactional.id.expiration.ms = 604800000 - unclean.leader.election.enable = false - zookeeper.clientCnxnSocket = null - zookeeper.connect = localhost:2181 - zookeeper.connection.timeout.ms = 18000 - zookeeper.max.in.flight.requests = 10 - zookeeper.session.timeout.ms = 18000 - zookeeper.set.acl = false - zookeeper.ssl.cipher.suites = null - zookeeper.ssl.client.enable = false - zookeeper.ssl.crl.enable = false - zookeeper.ssl.enabled.protocols = null - zookeeper.ssl.endpoint.identification.algorithm = HTTPS - zookeeper.ssl.keystore.location = null - zookeeper.ssl.keystore.password = null - zookeeper.ssl.keystore.type = null - zookeeper.ssl.ocsp.enable = false - zookeeper.ssl.protocol = TLSv1.2 - zookeeper.ssl.truststore.location = null - zookeeper.ssl.truststore.password = null - zookeeper.ssl.truststore.type = null - zookeeper.sync.time.ms = 2000 - (kafka.server.KafkaConfig) -[2022-04-26 22:13:45,526] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) -[2022-04-26 22:13:45,527] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) -[2022-04-26 22:13:45,529] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) -[2022-04-26 22:13:45,531] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) -[2022-04-26 22:13:45,556] INFO Log directory /tmp/kafka-logs not found, creating it. (kafka.log.LogManager) -[2022-04-26 22:13:45,588] INFO Loading logs from log dirs ArraySeq(/tmp/kafka-logs) (kafka.log.LogManager) -[2022-04-26 22:13:45,595] INFO Attempting recovery for all logs in /tmp/kafka-logs since no clean shutdown file was found (kafka.log.LogManager) -[2022-04-26 22:13:45,601] INFO Loaded 0 logs in 13ms. (kafka.log.LogManager) -[2022-04-26 22:13:45,601] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) -[2022-04-26 22:13:45,603] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) -[2022-04-26 22:13:46,166] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread) -[2022-04-26 22:13:46,368] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) -[2022-04-26 22:13:46,374] INFO Awaiting socket connections on 0.0.0.0:9092.
(kafka.network.Acceptor) -[2022-04-26 22:13:46,406] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) -[2022-04-26 22:13:46,415] INFO [BrokerToControllerChannelManager broker=0 name=alterIsr]: Starting (kafka.server.BrokerToControllerRequestThread) -[2022-04-26 22:13:46,444] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,445] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,446] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,448] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,466] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) -[2022-04-26 22:13:46,495] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient) -[2022-04-26 22:13:46,534] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1650978826512,1650978826512,1,0,0,72191851212439552,202,0,25 - (kafka.zk.KafkaZkClient) -[2022-04-26 22:13:46,535] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient) -[2022-04-26 22:13:46,626] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,634] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,635] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,648] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) -[2022-04-26 22:13:46,668] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:13:46,678] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) -[2022-04-26 22:13:46,680] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:13:46,705] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) -[2022-04-26 22:13:46,709] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) -[2022-04-26 22:13:46,710] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) -[2022-04-26 22:13:46,713] INFO Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). 
(kafka.server.FinalizedFeatureCache) -[2022-04-26 22:13:46,749] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) -[2022-04-26 22:13:46,774] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) -[2022-04-26 22:13:46,786] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer) -[2022-04-26 22:13:46,792] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) -[2022-04-26 22:13:46,792] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer) -[2022-04-26 22:13:46,799] INFO Kafka version: 3.1.0 (org.apache.kafka.common.utils.AppInfoParser) -[2022-04-26 22:13:46,799] INFO Kafka commitId: 37edeed0777bacb3 (org.apache.kafka.common.utils.AppInfoParser) -[2022-04-26 22:13:46,799] INFO Kafka startTimeMs: 1650978826792 (org.apache.kafka.common.utils.AppInfoParser) -[2022-04-26 22:13:46,801] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) -[2022-04-26 22:13:46,879] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread) -[2022-04-26 22:13:46,920] INFO [BrokerToControllerChannelManager broker=0 name=alterIsr]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread) -[2022-04-26 22:17:10,817] INFO Creating topic roy-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient) -[2022-04-26 22:17:10,962] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(roy-topic-0) (kafka.server.ReplicaFetcherManager) -[2022-04-26 22:17:11,042] INFO [LogLoader partition=roy-topic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) -[2022-04-26 22:17:11,059] INFO Created log for partition roy-topic-0 in /tmp/kafka-logs/roy-topic-0 with properties {} (kafka.log.LogManager) -[2022-04-26 22:17:11,061] INFO [Partition roy-topic-0 broker=0] No checkpointed highwatermark is found for partition roy-topic-0 (kafka.cluster.Partition) -[2022-04-26 22:17:11,062] INFO [Partition roy-topic-0 broker=0] Log loaded for partition roy-topic-0 with initial high watermark 0 (kafka.cluster.Partition) -[2022-04-26 22:23:26,413] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(0), 1 -> ArrayBuffer(0), 2 -> ArrayBuffer(0), 3 -> ArrayBuffer(0), 4 -> ArrayBuffer(0), 5 -> ArrayBuffer(0), 6 -> ArrayBuffer(0), 7 -> ArrayBuffer(0), 8 -> ArrayBuffer(0), 9 -> ArrayBuffer(0), 10 -> ArrayBuffer(0), 11 -> ArrayBuffer(0), 12 -> ArrayBuffer(0), 13 -> ArrayBuffer(0), 14 -> ArrayBuffer(0), 15 -> ArrayBuffer(0), 16 -> ArrayBuffer(0), 17 -> ArrayBuffer(0), 18 -> ArrayBuffer(0), 19 -> ArrayBuffer(0), 20 -> ArrayBuffer(0), 21 -> ArrayBuffer(0), 22 -> ArrayBuffer(0), 23 -> ArrayBuffer(0), 24 -> ArrayBuffer(0), 25 -> ArrayBuffer(0), 26 -> ArrayBuffer(0), 27 -> ArrayBuffer(0), 28 -> ArrayBuffer(0), 29 -> ArrayBuffer(0), 30 -> ArrayBuffer(0), 31 -> ArrayBuffer(0), 32 -> 
ArrayBuffer(0), 33 -> ArrayBuffer(0), 34 -> ArrayBuffer(0), 35 -> ArrayBuffer(0), 36 -> ArrayBuffer(0), 37 -> ArrayBuffer(0), 38 -> ArrayBuffer(0), 39 -> ArrayBuffer(0), 40 -> ArrayBuffer(0), 41 -> ArrayBuffer(0), 42 -> ArrayBuffer(0), 43 -> ArrayBuffer(0), 44 -> ArrayBuffer(0), 45 -> ArrayBuffer(0), 46 -> ArrayBuffer(0), 47 -> ArrayBuffer(0), 48 -> ArrayBuffer(0), 49 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient) -[2022-04-26 22:23:26,765] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) -[2022-04-26 22:23:26,770] INFO [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) -[2022-04-26 22:23:26,771] INFO Created log for partition __consumer_offsets-3 in /tmp/kafka-logs/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) -[2022-04-26 22:23:26,772] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) -[2022-04-26 22:23:26,772] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) ... (the same four entries, with only the partition number and timestamp changing, repeat for each of the remaining 49 __consumer_offsets partitions) ... -[2022-04-26 22:23:28,038] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,040] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,042] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,042] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,042] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator
for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,043] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,043] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: 
Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,044] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,044] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 
22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,045] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,045] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,046] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,046] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from 
__consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,047] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,047] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,051] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,052] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,052] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,053] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,053] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,053] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,054] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,054] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,054] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,054] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,054] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,055] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,056] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,056] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,056] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,056] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,057] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,057] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,057] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,057] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,058] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,059] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,059] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,059] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,059] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,059] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,060] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,061] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,062] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,062] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) -[2022-04-26 22:23:28,185] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group console-consumer-43896 in Empty state. Created a new member id console-consumer-bbfe5584-7b5a-4441-8819-85f3ec935893 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,198] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-43896 in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member console-consumer-bbfe5584-7b5a-4441-8819-85f3ec935893 with group instance id None) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,207] INFO [GroupCoordinator 0]: Stabilized group console-consumer-43896 generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:23:28,220] INFO [GroupCoordinator 0]: Assignment received from leader console-consumer-bbfe5584-7b5a-4441-8819-85f3ec935893 for group console-consumer-43896 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:26:48,156] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-43896 in state PreparingRebalance with old generation 1 (__consumer_offsets-21) (reason: Removing member console-consumer-bbfe5584-7b5a-4441-8819-85f3ec935893 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:26:48,157] INFO [GroupCoordinator 0]: Group console-consumer-43896 with generation 2 is now empty (__consumer_offsets-21) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:26:48,160] INFO [GroupCoordinator 0]: Member MemberMetadata(memberId=console-consumer-bbfe5584-7b5a-4441-8819-85f3ec935893, groupInstanceId=None, clientId=console-consumer, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group console-consumer-43896 through explicit `LeaveGroup` request (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:07,923] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group console-consumer-45419 in Empty state. Created a new member id console-consumer-91e84dc2-038b-4185-9b3f-c3ebe2474764 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:07,926] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-45419 in state PreparingRebalance with old generation 0 (__consumer_offsets-14) (reason: Adding new member console-consumer-91e84dc2-038b-4185-9b3f-c3ebe2474764 with group instance id None) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:07,927] INFO [GroupCoordinator 0]: Stabilized group console-consumer-45419 generation 1 (__consumer_offsets-14) with 1 members (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:07,934] INFO [GroupCoordinator 0]: Assignment received from leader console-consumer-91e84dc2-038b-4185-9b3f-c3ebe2474764 for group console-consumer-45419 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:10,988] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group console-consumer-47496 in Empty state. Created a new member id console-consumer-611636df-26b0-4d89-bae0-d2d8b113fb99 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:10,997] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-47496 in state PreparingRebalance with old generation 0 (__consumer_offsets-41) (reason: Adding new member console-consumer-611636df-26b0-4d89-bae0-d2d8b113fb99 with group instance id None) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:10,997] INFO [GroupCoordinator 0]: Stabilized group console-consumer-47496 generation 1 (__consumer_offsets-41) with 1 members (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:11,004] INFO [GroupCoordinator 0]: Assignment received from leader console-consumer-611636df-26b0-4d89-bae0-d2d8b113fb99 for group console-consumer-47496 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:15,479] INFO [GroupCoordinator 0]: Dynamic member with unknown member id joins group console-consumer-53603 in Empty state. Created a new member id console-consumer-d947ba68-da40-400d-aafc-4c9e2e86ccf9 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:15,481] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-53603 in state PreparingRebalance with old generation 0 (__consumer_offsets-38) (reason: Adding new member console-consumer-d947ba68-da40-400d-aafc-4c9e2e86ccf9 with group instance id None) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:15,482] INFO [GroupCoordinator 0]: Stabilized group console-consumer-53603 generation 1 (__consumer_offsets-38) with 1 members (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:27:15,489] INFO [GroupCoordinator 0]: Assignment received from leader console-consumer-d947ba68-da40-400d-aafc-4c9e2e86ccf9 for group console-consumer-53603 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:28,252] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-53603 in state PreparingRebalance with old generation 1 (__consumer_offsets-38) (reason: Removing member console-consumer-d947ba68-da40-400d-aafc-4c9e2e86ccf9 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:28,252] INFO [GroupCoordinator 0]: Group console-consumer-53603 with generation 2 is now empty (__consumer_offsets-38) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:28,254] INFO [GroupCoordinator 0]: Member MemberMetadata(memberId=console-consumer-d947ba68-da40-400d-aafc-4c9e2e86ccf9, groupInstanceId=None, clientId=console-consumer, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group console-consumer-53603 through explicit `LeaveGroup` request (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:29,831] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-47496 in state PreparingRebalance with old generation 1 (__consumer_offsets-41) (reason: Removing member console-consumer-611636df-26b0-4d89-bae0-d2d8b113fb99 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:29,831] INFO [GroupCoordinator 0]: Group console-consumer-47496 with generation 2 is now empty (__consumer_offsets-41) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:29,838] INFO [GroupCoordinator 0]: Member MemberMetadata(memberId=console-consumer-611636df-26b0-4d89-bae0-d2d8b113fb99, groupInstanceId=None, clientId=console-consumer, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group console-consumer-47496 through explicit `LeaveGroup` request (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:30,755] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-45419 in state PreparingRebalance with old generation 1 (__consumer_offsets-14) (reason: Removing member console-consumer-91e84dc2-038b-4185-9b3f-c3ebe2474764 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:30,755] INFO [GroupCoordinator 0]: Group console-consumer-45419 with generation 2 is now empty (__consumer_offsets-14) (kafka.coordinator.group.GroupCoordinator) -[2022-04-26 22:31:30,761] INFO [GroupCoordinator 0]: Member MemberMetadata(memberId=console-consumer-91e84dc2-038b-4185-9b3f-c3ebe2474764, groupInstanceId=None, clientId=console-consumer, clientHost=/127.0.0.1, sessionTimeoutMs=45000, rebalanceTimeoutMs=300000, supportedProtocols=List(range, cooperative-sticky)) has left group 
diff --git a/thirdparty/kafka_2.13-3.1.0/logs/state-change.log b/thirdparty/kafka_2.13-3.1.0/logs/state-change.log
deleted file mode 100644
index be53a6e..0000000
--- a/thirdparty/kafka_2.13-3.1.0/logs/state-change.log
+++ /dev/null
@@ -1,172 +0,0 @@
[... 172 deleted state-change.log lines: Controller id=0 epoch=1 partition state transitions (NonExistentPartition → NewPartition → OnlinePartition, with LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0)) for roy-topic-0 and the 50 __consumer_offsets partitions, plus the corresponding LeaderAndIsr and UpdateMetadata requests to broker 0 ...]
(state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] 
Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=0, leaderEpoch=0, isr=List(0), zkVersion=0) (state.change.logger) -[2022-04-26 22:23:26,738] INFO [Controller id=0 epoch=1] Sending LeaderAndIsr request to broker 0 with 50 become-leader and 0 become-follower partitions (state.change.logger) -[2022-04-26 22:23:26,739] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet(0) for 50 partitions (state.change.logger) -[2022-04-26 22:23:26,740] INFO [Controller id=0 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) -[2022-04-26 22:23:26,740] INFO [Broker id=0] Handling LeaderAndIsr request correlationId 3 from controller 0 for 50 partitions (state.change.logger) -[2022-04-26 22:23:26,765] INFO [Broker id=0] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 0 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) -[2022-04-26 22:23:26,772] INFO [Broker id=0] Leader __consumer_offsets-3 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,795] INFO [Broker id=0] Leader __consumer_offsets-18 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,822] INFO [Broker id=0] Leader __consumer_offsets-41 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,845] INFO [Broker id=0] Leader __consumer_offsets-10 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,868] INFO [Broker id=0] Leader __consumer_offsets-33 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,891] INFO [Broker id=0] Leader __consumer_offsets-48 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,918] INFO [Broker id=0] Leader __consumer_offsets-19 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,947] INFO [Broker id=0] Leader __consumer_offsets-34 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,970] INFO [Broker id=0] Leader __consumer_offsets-4 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. 
Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:26,997] INFO [Broker id=0] Leader __consumer_offsets-11 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,020] INFO [Broker id=0] Leader __consumer_offsets-26 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,045] INFO [Broker id=0] Leader __consumer_offsets-49 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,070] INFO [Broker id=0] Leader __consumer_offsets-39 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,096] INFO [Broker id=0] Leader __consumer_offsets-9 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,121] INFO [Broker id=0] Leader __consumer_offsets-24 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,146] INFO [Broker id=0] Leader __consumer_offsets-31 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,172] INFO [Broker id=0] Leader __consumer_offsets-46 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,195] INFO [Broker id=0] Leader __consumer_offsets-1 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,218] INFO [Broker id=0] Leader __consumer_offsets-16 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,242] INFO [Broker id=0] Leader __consumer_offsets-2 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,268] INFO [Broker id=0] Leader __consumer_offsets-25 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,292] INFO [Broker id=0] Leader __consumer_offsets-40 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,320] INFO [Broker id=0] Leader __consumer_offsets-47 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,341] INFO [Broker id=0] Leader __consumer_offsets-17 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. 
(state.change.logger) -[2022-04-26 22:23:27,364] INFO [Broker id=0] Leader __consumer_offsets-32 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,387] INFO [Broker id=0] Leader __consumer_offsets-37 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,413] INFO [Broker id=0] Leader __consumer_offsets-7 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,439] INFO [Broker id=0] Leader __consumer_offsets-22 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,461] INFO [Broker id=0] Leader __consumer_offsets-29 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,486] INFO [Broker id=0] Leader __consumer_offsets-44 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,513] INFO [Broker id=0] Leader __consumer_offsets-14 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,544] INFO [Broker id=0] Leader __consumer_offsets-23 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,567] INFO [Broker id=0] Leader __consumer_offsets-38 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,596] INFO [Broker id=0] Leader __consumer_offsets-8 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,621] INFO [Broker id=0] Leader __consumer_offsets-45 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,647] INFO [Broker id=0] Leader __consumer_offsets-15 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,677] INFO [Broker id=0] Leader __consumer_offsets-30 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,702] INFO [Broker id=0] Leader __consumer_offsets-0 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,731] INFO [Broker id=0] Leader __consumer_offsets-35 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. 
(state.change.logger) -[2022-04-26 22:23:27,760] INFO [Broker id=0] Leader __consumer_offsets-5 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,783] INFO [Broker id=0] Leader __consumer_offsets-20 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,809] INFO [Broker id=0] Leader __consumer_offsets-27 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,835] INFO [Broker id=0] Leader __consumer_offsets-42 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,862] INFO [Broker id=0] Leader __consumer_offsets-12 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,888] INFO [Broker id=0] Leader __consumer_offsets-21 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,913] INFO [Broker id=0] Leader __consumer_offsets-36 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,940] INFO [Broker id=0] Leader __consumer_offsets-6 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,966] INFO [Broker id=0] Leader __consumer_offsets-43 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:27,992] INFO [Broker id=0] Leader __consumer_offsets-13 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. (state.change.logger) -[2022-04-26 22:23:28,019] INFO [Broker id=0] Leader __consumer_offsets-28 starts at leader epoch 0 from offset 0 with high watermark 0 ISR [0] addingReplicas [] removingReplicas []. Previous leader epoch was -1. 
-[2022-04-26 22:23:28,047] INFO [Broker id=0] Finished LeaderAndIsr request in 1307ms correlationId 3 from controller 0 for 50 partitions (state.change.logger)
-[2022-04-26 22:23:28,050] INFO [Broker id=0] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 0 epoch 1 with correlation id 4 (state.change.logger)
diff --git a/thirdparty/kafka_2.13-3.1.0/logs/zookeeper-gc.log b/thirdparty/kafka_2.13-3.1.0/logs/zookeeper-gc.log
deleted file mode 100644
index 8c5b6c2..0000000
--- a/thirdparty/kafka_2.13-3.1.0/logs/zookeeper-gc.log
+++ /dev/null
@@ -1,16 +0,0 @@
-[2022-04-26T22:11:28.578+0900][gc,heap] Heap region size: 1M
-[2022-04-26T22:11:28.583+0900][gc ] Using G1
-[2022-04-26T22:11:28.583+0900][gc,heap,coops] Heap address: 0x00000007e0000000, size: 512 MB, Compressed Oops mode: Zero based, Oop shift amount: 3
-[2022-04-26T22:11:29.431+0900][gc,start ] GC(0) Pause Young (Normal) (G1 Evacuation Pause)
-[2022-04-26T22:11:29.432+0900][gc,task ] GC(0) Using 8 workers of 8 for evacuation
-[2022-04-26T22:11:29.437+0900][gc,phases ] GC(0) Pre Evacuate Collection Set: 0.0ms
-[2022-04-26T22:11:29.437+0900][gc,phases ] GC(0) Evacuate Collection Set: 4.7ms
-[2022-04-26T22:11:29.437+0900][gc,phases ] GC(0) Post Evacuate Collection Set: 0.4ms
-[2022-04-26T22:11:29.437+0900][gc,phases ] GC(0) Other: 1.0ms
-[2022-04-26T22:11:29.437+0900][gc,heap ] GC(0) Eden regions: 25->0(21)
-[2022-04-26T22:11:29.437+0900][gc,heap ] GC(0) Survivor regions: 0->4(4)
-[2022-04-26T22:11:29.437+0900][gc,heap ] GC(0) Old regions: 0->4
-[2022-04-26T22:11:29.437+0900][gc,heap ] GC(0) Humongous regions: 0->0
-[2022-04-26T22:11:29.437+0900][gc,metaspace ] GC(0) Metaspace: 15062K->15062K(1062912K)
-[2022-04-26T22:11:29.437+0900][gc ] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 25M->7M(512M) 6.276ms
-[2022-04-26T22:11:29.437+0900][gc,cpu ] GC(0) User=0.03s Sys=0.01s Real=0.00s
diff --git a/thirdparty/kafka_2.13-3.1.0/site-docs/kafka_2.13-3.1.0-site-docs.tgz b/thirdparty/kafka_2.13-3.1.0/site-docs/kafka_2.13-3.1.0-site-docs.tgz
deleted file mode 100644
index b318c1a..0000000
Binary files a/thirdparty/kafka_2.13-3.1.0/site-docs/kafka_2.13-3.1.0-site-docs.tgz and /dev/null differ