

Chapter 5 Kafka Cluster Deployment

As introduced in the previous chapters, the first step in bringing up a Fabric network is to generate certificates and related files, and these default artifacts are generated from the configtx.yaml and crypto-config.yaml configuration files.

In a Fabric network that uses Kafka as the orderer (consensus) type, configtx.yaml and crypto-config.yaml still play a central role, but their contents differ somewhat from the earlier samples.

This chapter walks through a Kafka-based cluster deployment. Its key concepts both summarize the previous three chapters and lay the groundwork for the material on smart contracts and CouchDB in this and later chapters.

5.1 The Fabric Ledger

  1. Ledger

即所有的狀态變更是有序且不可篡改的。狀态變更是由參與方送出的chaincode(智能合約)調用事務(transactions)的結果。每個事務都将産生一組資産鍵-值對,這些鍵-值對用于建立、更新或删除而送出給賬本。

The ledger consists of a blockchain, whose blocks store the ordered and immutable records, plus a state database that holds the current state. There is one ledger per channel, and each peer maintains a local copy of the ledger for every channel of which it is a member.

The chain is a transaction log structured as hash-linked blocks, where each block contains a sequence of N transactions.

A block header contains a hash of that block's transactions as well as a hash of the previous block's header. In this way, all transactions on the ledger are sequenced and cryptographically linked together; it is impossible to tamper with the ledger data without breaking the hash chain. The hash of the most recent block therefore represents every transaction that came before it, ensuring that all peers share a consistent and trusted state.

The chain is stored on the peer file system (local or attached storage), efficiently supporting the append-only nature of blockchain workloads.

  1. 狀态資料庫

該賬本的目前狀态資料表示鍊事務日志中包含的所有值的最新值。

由于目前狀态表示Channel所知道的全部最新鍵值,是以有時稱為“World State”。

Chaincode invocations execute transactions against the current state data. To make these chaincode interactions efficient, the latest value of every key is stored in a state database. The state database is simply an indexed view into the chain's transaction log and can therefore be regenerated from the chain at any time. The state database is automatically recovered (or generated, if needed) upon peer startup, before transactions are accepted.

狀态資料庫包括LevelDB 和 CouchDB 。 LevelDB 是嵌入在Peer程序中的預設狀态資料庫,并将Chaincode資料存儲為鍵-值對。CouchDB是一個可選的外部狀态資料庫,所寫的Chaincode資料被模組化為JSON時,它提供了額外的查詢支援,允許對JSON内容進行豐富的查詢。

  3. Transaction Flow

At a high level, the transaction flow starts with a transaction proposal sent by an application client, which is ultimately delivered to the designated endorsing peers.

Each endorsing peer verifies the client's signature and executes a chaincode function to simulate the transaction. What is returned to the client is the chaincode result of that simulation: the set of key/value versions read in the chaincode read set and the set of keys and values written in the chaincode write set, together with the peer's endorsement signature.

The client assembles the endorsements into a transaction payload and broadcasts it to the ordering service, which orders the transactions and produces blocks for all peers on the channel.

Before these transactions are committed to the ledger, each peer validates them.

First, the peer checks the endorsement policy to ensure that the correct allotment of the specified peers have signed the results, and it authenticates the signatures against the transaction payload.

其次,Peer 将對事務集進行版本控制 , 以確定資料完整性 , 并防止諸如重複開銷之類問題。

5.2 Transaction Processing Flow

This section describes the transaction mechanics that take place during a standard asset exchange. The scenario involves two clients, A and B, who are buying and selling radishes (a product). Each of them has a peer on the network through which they send their transactions and interact with the ledger.

Assume that a channel for this transaction flow is already set up and running. The application clients are registered with their organization's certificate authority (CA) and have obtained the cryptographic material needed to authenticate to the network.

The chaincode (containing a set of key-value pairs representing the initial state of the radish market) is installed on the peers and instantiated on the channel. The chaincode contains the logic defining a set of transaction instructions and the agreed-upon price for a radish. It also defines an endorsement policy stating that both peerA and peerB must endorse any transaction.
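
On the command line, such a policy is expressed with the -P flag when instantiating the chaincode: a "both must endorse" rule uses AND, while the OR form used later in section 5.5 only requires one of the two organizations. For example, with the MSP IDs used in this chapter:

peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["init","A","10","B","10"]}' -P "AND ('Org1MSP.member','Org2MSP.member')" -v 1.0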

The complete processing flow then follows the endorse, order, and validate sequence described in section 5.1.

5.3 Kafka Cluster Configuration

A minimal Kafka-based cluster consists of:

  • a 3-node ZooKeeper ensemble
  • a 4-node Kafka cluster
  • 3 orderer (ordering service) nodes
  • additional peer nodes

The clusters above require at least 10 servers to provide the cluster services; the remaining nodes are peers used for endorsement, validation, commit, and data synchronization.

Preparation:

Name      IP              Hostname                  Organization
Zk1       172.31.159.137  zookeeper1
Zk2       172.31.159.135  zookeeper2
Zk3       172.31.159.136  zookeeper3
Kafka1    172.31.159.133  kafka1
Kafka2    172.31.159.132  kafka2
Kafka3    172.31.159.134  kafka3
Kafka4    172.31.159.131  kafka4
Orderer0  172.31.159.130  orderer0.example.com
Orderer1  172.31.143.22   orderer1.example.com
Orderer2  172.31.143.23   orderer2.example.com
peer0     172.31.159.129  peer0.org1.example.com    Org1
peer1     172.31.143.21   peer1.org2.example.com    Org2

If high availability is a concern, a Kubernetes-based approach to managing the Docker containers is worth studying.

Docker and Docker Compose are installed on every server; the orderer and peer servers additionally need the Go and Fabric environments.

The basic environment setup is the same as in earlier chapters, so some resources can be reused directly.
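
Since every server runs Docker and Docker Compose, the images used later in this chapter can be pulled ahead of time on the relevant machines (image names as they appear in the compose files below):

docker pull hyperledger/fabric-zookeeper    # ZooKeeper servers
docker pull hyperledger/fabric-kafka        # Kafka servers
docker pull hyperledger/fabric-orderer      # orderer servers
docker pull hyperledger/fabric-peer         # peer servers
docker pull hyperledger/fabric-tools        # peer servers (cli)
docker pull hyperledger/fabric-couchdb      # peer servers (state database)
docker pull hyperledger/fabric-ca           # peer servers (CA)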

5.3.1 crypto-config.yaml Configuration

OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2

PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
    Specs:
      - Hostname: foo
        CommonName: foo27.org2.example.com
      - Hostname: bar
      - Hostname: baz


  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org4
    Domain: org4.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org5
    Domain: org5.example.com
    Template:
      Count: 2
    Users:
      Count: 1
           

Upload this configuration file to the aberic directory on the Orderer0 server, then run the following command to generate the artifacts required by the nodes:

./bin/cryptogen generate --config=./crypto-config.yaml
           

After the command completes, the generated material for the defined nodes can be found under the crypto-config directory.
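
For example, the layout can be checked roughly as follows (the directory names follow from the Specs and Template entries in crypto-config.yaml):

ls crypto-config/ordererOrganizations/example.com/orderers
# expect: orderer0.example.com  orderer1.example.com  orderer2.example.com
ls crypto-config/peerOrganizations
# expect: org1.example.com  org2.example.com  org3.example.com  org4.example.com  org5.example.com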


5.3.2 configtx Configuration

Because this deployment uses a Kafka cluster, the orderer type in this configuration must be set to "kafka". The Addresses section must list all the available ordering service addresses (i.e. the orderer cluster servers), while the Brokers list under Kafka may contain a subset, not necessarily all, of the Kafka cluster's server IPs or domain names.

The full configtx.yaml is as follows:

Profiles:

    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
                    - *Org3
                    - *Org4
                    - *Org5
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
                - *Org4
                - *Org5

Organizations:

    - &OrdererOrg
        Name: OrdererMSP
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp

    - &Org1
        Name: Org1MSP
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        Name: Org2MSP
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            - Host: peer0.org2.example.com
              Port: 7051

    - &Org3
        Name: Org3MSP
        ID: Org3MSP

        MSPDir: crypto-config/peerOrganizations/org3.example.com/msp

        AnchorPeers:
            - Host: peer0.org3.example.com
              Port: 7051

    - &Org4
        Name: Org4MSP
        ID: Org4MSP

        MSPDir: crypto-config/peerOrganizations/org4.example.com/msp

        AnchorPeers:
            - Host: peer0.org4.example.com
              Port: 7051

    - &Org5
        Name: Org5MSP
        ID: Org5MSP

        MSPDir: crypto-config/peerOrganizations/org5.example.com/msp

        AnchorPeers:
            - Host: peer0.org5.example.com
              Port: 7051

Orderer: &OrdererDefaults

    OrdererType: kafka

    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050

    BatchTimeout: 2s

    BatchSize:

        MaxMessageCount: 10

        AbsoluteMaxBytes: 98 MB

        PreferredMaxBytes: 512 KB

    Kafka:
        Brokers:
            - 172.31.159.131:9092
            - 172.31.159.132:9092
            - 172.31.159.133:9092
            - 172.31.159.134:9092

    Organizations:

Application: &ApplicationDefaults

    Organizations:

Capabilities:
    Global: &ChannelCapabilities
        V1_1: true

    Orderer: &OrdererCapabilities
        V1_1: true

    Application: &ApplicationCapabilities
        V1_1: true
           

Upload this file to the aberic directory on the Orderer0 server and run the following command:

./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
           

The genesis block genesis.block is used when the orderer service starts. The channel configuration transaction that peers will later use to create the channel is also generated here, with the following command:

./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
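
To sanity-check the generated artifacts, configtxgen can decode them back into readable form (inspection flags available in the Fabric 1.x tool):

./bin/configtxgen -inspectBlock ./channel-artifacts/genesis.block
./bin/configtxgen -inspectChannelCreateTx ./channel-artifacts/mychannel.tx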
           

5.3.3 Zookeeper Configuration

The basic operating requirements of a ZooKeeper ensemble are:

  • electing a leader
  • synchronizing data between nodes
  • there are many leader-election algorithms, but the standard the election must satisfy is the same
  • the leader must hold the highest transaction ID (zxid), somewhat analogous to root authority
  • a majority of the machines in the ensemble must respond to and follow the elected leader

The configuration files below always satisfy the five points above.

The docker-zookeeper1.yaml file is as follows:

version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # The ID must be unique within the ensemble and should have a value
      # between 1 and 255.
      - ZOO_MY_ID=1
      #
      # The list of servers that make up the ZooKeeper ensemble; it must match the list held by every ZK server.
      # There are two port numbers: the first is used by followers to connect to the leader, the second for leader election.
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The docker-zookeeper2.yaml file is as follows:

version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The docker-zookeeper3.yaml file is as follows:

version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           
Note: the ZooKeeper ensemble size may be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a single point of failure; more than 7 ZooKeeper servers is considered overkill.
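
Once all three containers are running (see section 5.4.1), the ensemble state can be checked on each host; a sketch, assuming the fabric-zookeeper image ships the standard ZooKeeper scripts:

docker exec zookeeper1 zkServer.sh status
# expect "Mode: leader" on exactly one node and "Mode: follower" on the other two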

5.3.4 Kafka Configuration

Kafka needs four compose files: docker-kafka1.yaml, docker-kafka2.yaml, docker-kafka3.yaml, and docker-kafka4.yaml.

The content of docker-kafka1.yaml, with explanations, is as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# We use K and Z to denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
# 
# 1) The minimum value of K should be 4. This is the minimum needed for crash fault tolerance:
#    with 4 brokers, one broker can go down and channels remain readable and writable, and new channels can still be created.
# 2) Z may be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a single point of failure.
#    More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=1
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      # 
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      # 
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter).
      # Each block holds at most Orderer.AbsoluteMaxBytes bytes (excluding headers); call that value A
      # (98 MB in the configtx.yaml above). message.max.bytes and replica.fetch.max.bytes should both be
      # set larger than A, leaving some buffer space for headers -- 1 MB is more than enough. The values
      # must satisfy: Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes.
      # (Strictly speaking, message.max.bytes should also be smaller than socket.request.max.bytes, which
      # defaults to 100 MB. Blocks larger than 100 MB would require editing the hard-coded value
      # brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuilding, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      # 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      # 
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - "9092:9092"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The content of docker-kafka2.yaml, with explanations, is as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# We use K and Z to denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
# 
# 1) The minimum value of K should be 4. This is the minimum needed for crash fault tolerance:
#    with 4 brokers, one broker can go down and channels remain readable and writable, and new channels can still be created.
# 2) Z may be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a single point of failure.
#    More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=2
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      # 
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      # 
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter).
      # Each block holds at most Orderer.AbsoluteMaxBytes bytes (excluding headers); call that value A
      # (98 MB in the configtx.yaml above). message.max.bytes and replica.fetch.max.bytes should both be
      # set larger than A, leaving some buffer space for headers -- 1 MB is more than enough. The values
      # must satisfy: Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes.
      # (Strictly speaking, message.max.bytes should also be smaller than socket.request.max.bytes, which
      # defaults to 100 MB. Blocks larger than 100 MB would require editing the hard-coded value
      # brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuilding, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      # 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      # 
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - "9092:9092"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The content of docker-kafka3.yaml, with explanations, is as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# We use K and Z to denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
# 
# 1) The minimum value of K should be 4. This is the minimum needed for crash fault tolerance:
#    with 4 brokers, one broker can go down and channels remain readable and writable, and new channels can still be created.
# 2) Z may be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a single point of failure.
#    More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=3
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      # 
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      # 
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter).
      # Each block holds at most Orderer.AbsoluteMaxBytes bytes (excluding headers); call that value A
      # (98 MB in the configtx.yaml above). message.max.bytes and replica.fetch.max.bytes should both be
      # set larger than A, leaving some buffer space for headers -- 1 MB is more than enough. The values
      # must satisfy: Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes.
      # (Strictly speaking, message.max.bytes should also be smaller than socket.request.max.bytes, which
      # defaults to 100 MB. Blocks larger than 100 MB would require editing the hard-coded value
      # brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuilding, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      # 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      # 
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - "9092:9092"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The content of docker-kafka4.yaml, with explanations, is as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
# 
# We use K and Z to denote the number of nodes in the Kafka cluster and in the ZooKeeper ensemble respectively.
# 
# 1) The minimum value of K should be 4. This is the minimum needed for crash fault tolerance:
#    with 4 brokers, one broker can go down and channels remain readable and writable, and new channels can still be created.
# 2) Z may be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a single point of failure.
#    More than 7 ZooKeeper servers is considered overkill.
#

version: '2'

services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      # ========================================================================
      #     Reference: https://kafka.apache.org/documentation/#configuration
      # ========================================================================
      #
      # broker.id
      - KAFKA_BROKER_ID=4
      #
      # min.insync.replicas
      # Let the value of this setting be M. Data is considered committed when
      # it is written to at least M replicas (which are then considered in-sync
      # and belong to the in-sync replica set, or ISR). In any other case, the
      # write operation returns an error. Then:
      # 1. If up to M-N replicas -- out of the N (see default.replication.factor
      # below) that the channel data is written to -- become unavailable,
      # operations proceed normally.
      # 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
      # of M, so it stops accepting writes. Reads work without issues. The
      # channel becomes writeable again when M replicas get in-sync.
      # 
      - KAFKA_MIN_INSYNC_REPLICAS=2
      #
      # default.replication.factor
      # Let the value of this setting be N. A replication factor of N means that
      # each channel will have its data replicated to N brokers. These are the
      # candidates for the ISR set of a channel. As we noted in the
      # min.insync.replicas section above, not all of these brokers have to be
      # available all the time. In this sample configuration we choose a
      # default.replication.factor of K-1 (where K is the total number of brokers in
      # our Kafka cluster) so as to have the largest possible candidate set for
      # a channel's ISR. We explicitly avoid setting N equal to K because
      # channel creations cannot go forward if less than N brokers are up. If N
      # were set equal to K, a single broker going down would mean that we would
      # not be able to create new channels, i.e. the crash fault tolerance of
      # the ordering service would be non-existent.
      # 
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      #
      # zookeeper.connect
      # Point to the set of Zookeeper nodes comprising a ZK ensemble.
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      #
      # zookeeper.connection.timeout.ms
      # The max time that the client waits to establish a connection to
      # Zookeeper. If not set, the value in zookeeper.session.timeout.ms (below)
      # is used.
      #- KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS = 6000
      #
      # zookeeper.session.timeout.ms
      #- KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS = 6000
      #
      # socket.request.max.bytes
      # The maximum number of bytes in a socket request. ATTN: If you set this
      # env var, make sure to update `brokerConfig.Producer.MaxMessageBytes` in
      # `newBrokerConfig()` in `fabric/orderer/kafka/config.go` accordingly.
      #- KAFKA_SOCKET_REQUEST_MAX_BYTES=104857600 # 100 * 1024 * 1024 B
      #
      # message.max.bytes
      # The maximum size of envelope that the broker can receive.
      # 
      # The maximum block size is set in configtx.yaml (see the Orderer.AbsoluteMaxBytes parameter).
      # Each block holds at most Orderer.AbsoluteMaxBytes bytes (excluding headers); call that value A
      # (98 MB in the configtx.yaml above). message.max.bytes and replica.fetch.max.bytes should both be
      # set larger than A, leaving some buffer space for headers -- 1 MB is more than enough. The values
      # must satisfy: Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes.
      # (Strictly speaking, message.max.bytes should also be smaller than socket.request.max.bytes, which
      # defaults to 100 MB. Blocks larger than 100 MB would require editing the hard-coded value
      # brokerConfig.Producer.MaxMessageBytes in fabric/orderer/kafka/config.go and rebuilding, which is not recommended.)
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # replica.fetch.max.bytes
      # The number of bytes of messages to attempt to fetch for each channel.
      # This is not an absolute maximum, if the fetched envelope is larger than
      # this value, the envelope will still be returned to ensure that progress
      # can be made. The maximum message size accepted by the broker is defined
      # via message.max.bytes above.
      # 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      #
      # unclean.leader.election.enable
      # Data consistency is key in a blockchain environment. We cannot have a
      # leader chosen outside of the in-sync replica set, or we run the risk of
      # overwriting the offsets that the previous leader produced, and --as a
      # result-- rewriting the blockchain that the orderers produce.
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      #
      # log.retention.ms
      # Until the ordering service in Fabric adds support for pruning of the
      # Kafka logs, time-based retention should be disabled so as to prevent
      # segments from expiring. (Size-based retention -- see
      # log.retention.bytes -- is disabled by default so there is no need to set
      # it explicitly.)
      # 
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
    ports:
      - "9092:9092"
    extra_hosts:
     - "zookeeper1:172.31.159.137"
     - "zookeeper2:172.31.159.135"
     - "zookeeper3:172.31.159.136"
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

Kafka's default port is 9092.

The Kafka cluster should contain at least 4 brokers; this is the minimum needed for crash fault tolerance. With 4 brokers, one broker can crash (stop serving) and channels can still be read from and written to, and new channels can still be created.
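
Whether all four brokers have registered with ZooKeeper can be checked from one of the ZooKeeper hosts; a sketch, assuming zkCli.sh is available inside the fabric-zookeeper image:

docker exec zookeeper1 zkCli.sh -server localhost:2181 ls /brokers/ids
# the output should end with: [1, 2, 3, 4]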

5.3.5 Orderer Configuration

The orderer uses three compose files: docker-orderer0.yaml, docker-orderer1.yaml, and docker-orderer2.yaml.

The docker-orderer0.yaml file is configured as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[172.31.159.131:9092,172.31.159.132:9092,172.31.159.133:9092,172.31.159.134:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The docker-orderer1.yaml file is configured as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[172.31.159.131:9092,172.31.159.132:9092,172.31.159.133:9092,172.31.159.134:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

The docker-orderer2.yaml file is configured as follows:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s 
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s 
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[172.31.159.131:9092,172.31.159.132:9092,172.31.159.133:9092,172.31.159.134:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - aberic
    ports:
      - 7050:7050
    extra_hosts:
     - "kafka1:172.31.159.133"
     - "kafka2:172.31.159.132"
     - "kafka3:172.31.159.134"
     - "kafka4:172.31.159.131"
           

Parameter explanations:

CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE: the Docker network mode used when containers are created.

ORDERER_GENERAL_LOGLEVEL: the log level of the orderer process; debug is used here for easier troubleshooting, while production environments should use a higher level such as error.

ORDERER_GENERAL_GENESISMETHOD: tells the orderer that the genesis block of this Fabric network is provided as a file.

ORDERER_GENERAL_GENESISFILE: the exact path of the genesis block.

ORDERER_GENERAL_LOCALMSPID: the MSP ID of the orderer organization (OrdererMSP, as defined in configtx.yaml).

ORDERER_GENERAL_LOCALMSPDIR: the path of the local MSP directory.

ORDERER_GENERAL_TLS_ENABLED: whether TLS is enabled.

ORDERER_GENERAL_TLS_PRIVATEKEY: the location of the TLS private key file.

ORDERER_GENERAL_TLS_CERTIFICATE: the location of the TLS certificate.

ORDERER_GENERAL_TLS_ROOTCAS: the location of the TLS root CA certificates.

ORDERER_KAFKA_RETRY_LONGINTERVAL: the longest interval between retries.

ORDERER_KAFKA_RETRY_LONGTOTAL: the maximum total time spent retrying at the long interval.

ORDERER_KAFKA_RETRY_SHORTINTERVAL: the shortest interval between retries.

ORDERER_KAFKA_RETRY_SHORTTOTAL: the maximum total time spent retrying at the short interval.

ORDERER_KAFKA_VERBOSE: enables logging of the interaction with the Kafka cluster.

ORDERER_KAFKA_BROKERS: points to the set of Kafka brokers.

working_dir: the working directory of the orderer service.

volumes: maps the directories referenced in the environment configuration into the container, i.e. the locations of the MSP and TLS root and certificate files, as well as the genesis block.

5.4 Starting the Cluster

The Kafka-based cluster must be started in dependency order, with the underlying (root) cluster first: start the ZooKeeper ensemble, then the Kafka cluster, and finally the orderer (ordering service) cluster.

5.4.1 Starting the Zookeeper Cluster

Upload docker-zookeeper1.yaml, docker-zookeeper2.yaml, and docker-zookeeper3.yaml to a directory of your choice on the ZK1, ZK2, and ZK3 servers respectively.

Note:

The ZooKeeper servers do not need the Go or Fabric environments. For clarity and consistency of operation, you can still create the same
/home/zyp/development/go/src/github.com/hyperledger/fabric/aberic
directory.

Refer to the file layout used for the Fabric network deployment in the earlier chapters.
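
Uploading can be done with any tool; for example with scp (the root user below is a placeholder, adjust to your environment):

# run once on each ZooKeeper server
mkdir -p /home/zyp/development/go/src/github.com/hyperledger/fabric/aberic
# then, from the machine holding the three compose files
scp docker-zookeeper1.yaml root@172.31.159.137:/home/zyp/development/go/src/github.com/hyperledger/fabric/aberic/
scp docker-zookeeper2.yaml root@172.31.159.135:/home/zyp/development/go/src/github.com/hyperledger/fabric/aberic/
scp docker-zookeeper3.yaml root@172.31.159.136:/home/zyp/development/go/src/github.com/hyperledger/fabric/aberic/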


After uploading, run the following commands on ZK1, ZK2, and ZK3 respectively:

docker-compose -f docker-zookeeper1.yaml up  //ZK1
docker-compose -f docker-zookeeper2.yaml up  //ZK2
docker-compose -f docker-zookeeper3.yaml up  //ZK3	
           

5.4.2 Starting the Kafka Cluster

As above, create the familiar path (the …/fabric/aberic directory) and upload the four docker-kafkaN.yaml configuration files to the Kafka1, Kafka2, Kafka3, and Kafka4 servers respectively.

Start each broker by running the corresponding command on its server:

docker-compose -f docker-kafka1.yaml up    // on the Kafka1 server
docker-compose -f docker-kafka2.yaml up    // on the Kafka2 server
docker-compose -f docker-kafka3.yaml up    // on the Kafka3 server
docker-compose -f docker-kafka4.yaml up    // on the Kafka4 server
           
A problem you may encounter:

"OpenJDK 64-Bit Server VM warning : INFO : os :commit_memory "

The default Kafka heap size (heap-opts) is 1 GB; if your test server has less memory than that, you will see the error above.

Solution:
Add the following parameter to the environment section of the Kafka compose file:
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
           

5.4.3 Starting the Orderer Cluster

As above, upload docker-orderer0.yaml, docker-orderer1.yaml, and docker-orderer2.yaml to the aberic directory (create it yourself) on the Orderer0, Orderer1, and Orderer2 servers respectively.

Upload the genesis.block genesis block generated in section 5.3.2 (produced the same way as in Chapter 3, Deploying a Single-Host Multi-Node Network) to the …/aberic/channel-artifacts directory on each orderer server (create the directory manually if it does not exist).


You also need to upload the crypto-config.yaml configuration file, together with the entire ordererOrganizations folder under crypto-config, to the …/aberic/crypto-config directory on each orderer server (create this directory manually).

All required files were generated and configured earlier (sections 5.3.1 and 5.3.2, following the same procedure as Chapter 3, Deploying a Single-Host Multi-Node Network); simply copy them over.
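
Copying can again be done with scp from the Orderer0 server, which already holds the generated artifacts; a sketch (the root user is a placeholder, and ABERIC stands for the aberic directory path used on all servers):

ABERIC=/home/zyp/development/go/src/github.com/hyperledger/fabric/aberic
for host in 172.31.143.22 172.31.143.23; do
  ssh root@$host "mkdir -p $ABERIC/channel-artifacts $ABERIC/crypto-config"
  scp $ABERIC/channel-artifacts/genesis.block root@$host:$ABERIC/channel-artifacts/
  scp $ABERIC/crypto-config.yaml root@$host:$ABERIC/crypto-config/
  scp -r $ABERIC/crypto-config/ordererOrganizations root@$host:$ABERIC/crypto-config/
done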

Once the files are in place, run the start command on each server:

docker-compose -f docker-orderer0.yaml up -d   // start command on Orderer0
docker-compose -f docker-orderer1.yaml up -d   // start command on Orderer1
docker-compose -f docker-orderer2.yaml up -d   // start command on Orderer2
           

When an orderer starts, it creates a system channel named testchainid. The ordering service has started successfully when log lines like the following appear:

2020-02-02 18:58:44.571 CST [orderer.consensus.kafka] startThread -> INFO 011 [channel: testchainid] Channel consumer set up successfully
2020-02-02 18:58:44.571 CST [orderer.consensus.kafka] startThread -> INFO 012 [channel: testchainid] Start phase completed successfully
           

5.5 Testing the Cluster Environment

Prepare the docker-peer0org1.yaml configuration file:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/dbb4538c1dacb57bdca5d39bdaf0066a98826bebb47b86a05d18972db5876d1e_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/dbb4538c1dacb57bdca5d39bdaf0066a98826bebb47b86a05d18972db5876d1e_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=172.31.159.129:5984

      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
     - "orderer0.example.com:172.31.159.130"
     - "orderer1.example.com:172.31.143.22"
     - "orderer2.example.com:172.31.143.23"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
     - "orderer0.example.com:172.31.159.130"
     - "orderer1.example.com:172.31.143.22"
     - "orderer2.example.com:172.31.143.23"
     - "peer0.org1.example.com:172.31.159.129"
           

Upload this file to the …/aberic directory on the peer0 server.

Upload the mychannel.tx file generated earlier (section 5.3.2, produced the same way as in Chapter 3, Deploying a Single-Host Multi-Node Network) to the …/aberic/channel-artifacts directory.


Upload the peerOrganizations folder generated earlier (section 5.3.1) to the …/aberic/crypto-config directory (create the path manually), uploading only the Org1-related material.


Then start the peer node services:

docker-compose -f docker-peer0org1.yaml up -d
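
Once the containers are up, you can check them and enter the cli container, from which the peer commands below are issued (container names as defined in docker-peer0org1.yaml):

docker ps --format '{{.Names}}'    // expect cli, peer0.org1.example.com, ca and couchdb
docker exec -it cli bash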
           

Create the channel:

peer channel create -o orderer0.example.com:7050 -c mychannel -t 50 -f ./channel-artifacts/mychannel.tx
           

Join the channel with the following command:

peer channel join -b mychannel.block
           
The test steps and commands here are essentially the same as in Chapter 3, Deploying a Single-Host Multi-Node Network: the network is verified through a transfer and a query. Refer to that chapter if anything is unclear.

Install the chaincode:

peer chaincode install -n mycc -p github.com/hyperledger/fabric/aberic/chaincode/go/chaincode_example02 -v 1.0
           

Instantiate the chaincode:

peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["init","A","10","B","10"]}' -P  "OR ('Org1MSP.member','Org2MSP.member')" -v 1.0
           

Query:
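
The query command itself (using the query function of chaincode_example02, run in the cli container like the commands above) looks like this:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","A"]}'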

Output: Query Result: 10
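
To also test a transfer, as mentioned above, an invoke along the following lines can be used (a sketch of the chaincode_example02 interface; after it commits, querying A again should return 5):

peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","A","B","5"]}'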
