Related posts:
Kubernetes Deployment (1): Architecture and Features
Kubernetes Deployment (2): System Environment Initialization
Kubernetes Deployment (3): Creating the CA Certificates
Kubernetes Deployment (4): ETCD Cluster Deployment
Kubernetes Deployment (5): Haproxy and Keepalived Deployment
Kubernetes Deployment (6): Master Node Deployment
Kubernetes Deployment (7): Node Deployment
Kubernetes Deployment (8): Flannel Network Deployment
Kubernetes Deployment (9): CoreDNS, Dashboard, and Ingress Deployment
Kubernetes Deployment (10): Storage with GlusterFS and Heketi
Kubernetes Deployment (11): Management with Helm and Rancher
Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm
Overview
This guide covers integrating, deploying, and managing containerized GlusterFS storage nodes within a Kubernetes cluster. This enables Kubernetes administrators to provide reliable shared storage to their users.
It includes a setup guide with an example server pod that uses a dynamically provisioned GlusterFS volume for its storage. For those who want to test or learn more about this topic, follow the quick-start instructions in the main README of gluster-kubernetes.
This guide is intended to demonstrate a minimal example of Heketi managing Gluster in a Kubernetes environment.
Infrastructure Requirements
- A running Kubernetes cluster with at least three worker nodes, each with at least one available raw block device attached (such as an EBS volume or a local disk).
# Use `file -s` to inspect the disk: if it reports "data", it is a raw block device.
# If not, you can use pvcreate and pvremove to wipe the existing signature.
[root@node-04 ~]# file -s /dev/sdc
/dev/sdc: x86 boot sector, code offset 0xb8
[root@node-04 ~]# pvcreate /dev/sdc
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdc.
  Physical volume "/dev/sdc" successfully created.
[root@node-04 ~]# pvremove /dev/sdc
  Labels on physical volume "/dev/sdc" successfully wiped.
[root@node-04 ~]# file -s /dev/sdc
/dev/sdc: data
- The hosts serving as GlusterFS nodes need the glusterfs-client, glusterfs-fuse, and socat packages installed:
yum install -y glusterfs-client glusterfs-fuse socat
- Each Kubernetes node host needs the dm_thin_pool kernel module loaded:
modprobe dm_thin_pool
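The prerequisite checks above can be wrapped in a small helper to run on each node. This is a hedged sketch: the `check_module` function name is hypothetical, and a CentOS/RHEL host is assumed.

```shell
# Hypothetical pre-flight helper: report whether a kernel module appears in
# `lsmod` output. Run on each Kubernetes node before deploying GlusterFS.
check_module() {
  # $1 = module name, $2 = output of `lsmod`
  if echo "$2" | grep -q "^$1 "; then
    echo "loaded"
  else
    echo "missing"
  fi
}

# On a real node you would run something like:
#   check_module dm_thin_pool "$(lsmod)"   # prints "loaded" or "missing"
#   modprobe dm_thin_pool                   # if it printed "missing"
```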
Client Installation
Heketi provides a CLI that gives users a way to manage the deployment and configuration of GlusterFS in Kubernetes. Download and install heketi-cli on your client machine; the heketi-cli version should ideally match the Heketi server version, otherwise you may run into errors.
Kubernetes Deployment
- Deploy the GlusterFS DaemonSet:
{
  "kind": "DaemonSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "glusterfs",
    "labels": { "glusterfs": "deployment" },
    "annotations": {
      "description": "GlusterFS Daemon Set",
      "tags": "glusterfs"
    }
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "glusterfs",
        "labels": { "glusterfs-node": "daemonset" }
      },
      "spec": {
        "nodeSelector": { "storagenode": "glusterfs" },
        "hostNetwork": true,
        "containers": [
          {
            "image": "gluster/gluster-centos:latest",
            "imagePullPolicy": "Always",
            "name": "glusterfs",
            "volumeMounts": [
              { "name": "glusterfs-heketi", "mountPath": "/var/lib/heketi" },
              { "name": "glusterfs-run", "mountPath": "/run" },
              { "name": "glusterfs-lvm", "mountPath": "/run/lvm" },
              { "name": "glusterfs-etc", "mountPath": "/etc/glusterfs" },
              { "name": "glusterfs-logs", "mountPath": "/var/log/glusterfs" },
              { "name": "glusterfs-config", "mountPath": "/var/lib/glusterd" },
              { "name": "glusterfs-dev", "mountPath": "/dev" },
              { "name": "glusterfs-cgroup", "mountPath": "/sys/fs/cgroup" }
            ],
            "securityContext": { "capabilities": {}, "privileged": true },
            "readinessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 60,
              "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }
            },
            "livenessProbe": {
              "timeoutSeconds": 3,
              "initialDelaySeconds": 60,
              "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }
            }
          }
        ],
        "volumes": [
          { "name": "glusterfs-heketi", "hostPath": { "path": "/var/lib/heketi" } },
          { "name": "glusterfs-run" },
          { "name": "glusterfs-lvm", "hostPath": { "path": "/run/lvm" } },
          { "name": "glusterfs-etc", "hostPath": { "path": "/etc/glusterfs" } },
          { "name": "glusterfs-logs", "hostPath": { "path": "/var/log/glusterfs" } },
          { "name": "glusterfs-config", "hostPath": { "path": "/var/lib/glusterd" } },
          { "name": "glusterfs-dev", "hostPath": { "path": "/dev" } },
          { "name": "glusterfs-cgroup", "hostPath": { "path": "/sys/fs/cgroup" } }
        ]
      }
    }
  }
}
$ kubectl create -f glusterfs-daemonset.json
- Get the node names by running:
$ kubectl get nodes
- Deploy the gluster containers onto specific nodes by setting the label storagenode=glusterfs on those nodes:
[root@node-01 heketi]# kubectl label node 10.31.90.204 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.205 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.206 storagenode=glusterfs
Verify that the pods are running on the nodes; at least three pods should be running.
$ kubectl get pods
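For a more targeted check, the pod label from the DaemonSet template above (`glusterfs-node=daemonset`) can be used to list only the GlusterFS pods together with the nodes they landed on:

```shell
# List the GlusterFS DaemonSet pods and the nodes they are scheduled on.
kubectl get pods -l glusterfs-node=daemonset -o wide
```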
- 接下來我們将為Heketi建立一個ServiceAccount:
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
$ kubectl create -f heketi-service-account.json
- We now have to grant that service account the ability to control the gluster pods. We do this by creating a cluster role binding for the newly created service account:
$ kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
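As a sanity check (an optional step not in the original walkthrough), `kubectl auth can-i` can impersonate the service account to confirm the binding took effect:

```shell
# Both commands should print "yes" once the cluster role binding exists:
# the "edit" role allows managing pods and exec'ing into them.
kubectl auth can-i get pods --as=system:serviceaccount:default:heketi-service-account
kubectl auth can-i create pods/exec --as=system:serviceaccount:default:heketi-service-account
```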
- Now create a Kubernetes secret that will hold the configuration of our Heketi instance. The configuration file must be set to use the kubernetes executor so that the Heketi server can control the gluster pods. Beyond that, feel free to experiment with the configuration options.
{
"_port_comment": "Heketi Server Port Number",
"port": "8080",
"_use_auth": "Enable JWT authorization. Please enable for deployment",
"use_auth": false,
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "My Secret"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "My Secret"
}
},
"_glusterfs_comment": "GlusterFS Configuration",
"glusterfs": {
"_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
"executor": "kubernetes",
"_db_comment": "Database file name",
"db": "/var/lib/heketi/heketi.db",
"kubeexec": {
"rebalance_on_expansion": true
},
"sshexec": {
"rebalance_on_expansion": true,
"keyfile": "/etc/heketi/private_key",
"fstab": "/etc/fstab",
"port": "22",
"user": "root",
"sudo": false
}
},
"_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
"backup_db_to_kube_secret": false
}
$ kubectl create secret generic heketi-config-secret --from-file=./heketi.json
- Next, deploy an initial (bootstrap) Heketi pod and a service to access that pod, using the heketi-bootstrap.json file below.
Submit the file and verify that everything is running properly, as shown here:
{
  "kind": "List",
  "apiVersion": "v1",
  "items": [
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "deploy-heketi",
        "labels": { "glusterfs": "heketi-service", "deploy-heketi": "support" },
        "annotations": { "description": "Exposes Heketi Service" }
      },
      "spec": {
        "selector": { "name": "deploy-heketi" },
        "ports": [ { "name": "deploy-heketi", "port": 8080, "targetPort": 8080 } ]
      }
    },
    {
      "kind": "Deployment",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "deploy-heketi",
        "labels": { "glusterfs": "heketi-deployment", "deploy-heketi": "deployment" },
        "annotations": { "description": "Defines how to deploy Heketi" }
      },
      "spec": {
        "replicas": 1,
        "template": {
          "metadata": {
            "name": "deploy-heketi",
            "labels": { "name": "deploy-heketi", "glusterfs": "heketi-pod", "deploy-heketi": "pod" }
          },
          "spec": {
            "serviceAccountName": "heketi-service-account",
            "containers": [
              {
                "image": "heketi/heketi:8",
                "imagePullPolicy": "Always",
                "name": "deploy-heketi",
                "env": [
                  { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                  { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                  { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                  { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                  { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                ],
                "ports": [ { "containerPort": 8080 } ],
                "volumeMounts": [
                  { "name": "db", "mountPath": "/var/lib/heketi" },
                  { "name": "config", "mountPath": "/etc/heketi" }
                ],
                "readinessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 3,
                  "httpGet": { "path": "/hello", "port": 8080 }
                },
                "livenessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 30,
                  "httpGet": { "path": "/hello", "port": 8080 }
                }
              }
            ],
            "volumes": [
              { "name": "db" },
              { "name": "config", "secret": { "secretName": "heketi-config-secret" } }
            ]
          }
        }
      }
    }
  ]
}
# kubectl create -f heketi-bootstrap.json
service "deploy-heketi" created
deployment "deploy-heketi" created
[root@node-01 heketi]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deploy-heketi-8888799fd-cmfp6 1/1 Running 0 6m
glusterfs-7t5ls 1/1 Running 0 8m
glusterfs-drsx9 1/1 Running 0 8m
glusterfs-pnnn8 1/1 Running 0 8m
- Now that the bootstrap Heketi service is running, configure port forwarding so that the Heketi CLI can communicate with the service. Using the name of the Heketi pod, run the following command:
kubectl port-forward deploy-heketi-8888799fd-cmfp6 :8080
If a local port is free on the system where you run the command, you can bind the forward to a fixed local port (here 18080) for convenience:
kubectl port-forward deploy-heketi-8888799fd-cmfp6 18080:8080
Now verify that the port forwarding is working by running a sample query against the Heketi service. The port-forward command prints the local port it is forwarding; incorporate that port into a URL to test the service, as shown here:
curl http://localhost:18080/hello
Handling connection for 18080
Hello from Heketi
Finally, set an environment variable for the Heketi CLI client so that it knows how to reach the Heketi server:
export HEKETI_CLI_SERVER=http://localhost:18080
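The port-forward steps above can be scripted end to end. This is a sketch under two assumptions taken from the bootstrap manifest: the pod carries the label `deploy-heketi=pod`, and local port 18080 is free.

```shell
# Discover the bootstrap Heketi pod by label rather than hard-coding its name.
HEKETI_POD=$(kubectl get pods -l deploy-heketi=pod \
  -o jsonpath='{.items[0].metadata.name}')

# Forward a fixed local port to the pod's 8080 in the background.
kubectl port-forward "$HEKETI_POD" 18080:8080 &

# Point heketi-cli at the forwarded address and test the /hello endpoint.
export HEKETI_CLI_SERVER=http://localhost:18080
curl "$HEKETI_CLI_SERVER/hello"
```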
- 接下來,我們将向Heketi提供有關要管理的GlusterFS叢集的資訊。我們通過拓撲檔案提供此資訊 。您克隆的repo中有一個示例拓撲檔案,名為topology-sample.json。拓撲指定運作GlusterFS容器的Kubernetes節點以及每個節點的相應原始塊裝置。
- 確定hostnames/manage指向下面顯示的确切名稱kubectl get nodes,并且hostnames/storage是存儲網絡的IP位址。
- 重要資訊:此時,必須使用與伺服器版本比對的heketi-cli版本加載拓撲檔案。作為最後的手段,Heketi容器附帶了一份可以通過的方式通路的heketi-cli kubectl exec ...。
修改拓撲檔案以反映您所做的選擇,然後部署它,如下所示:
{
"clusters": [
{
"nodes": [
{
"node": {
"hostnames": {
"manage": [
"10.31.90.204"
],
"storage":[
"10.31.90.204"
]
},
"zone": 1
},
"devices": [
"/dev/sdc"
]
},
{
"node": {
"hostnames": {
"manage": [
"10.31.90.205"
],
"storage":[
"10.31.90.205"
]
},
"zone": 1
},
"devices": [
"/dev/sdc"
]
},
{
"node": {
"hostnames": {
"manage": [
"10.31.90.206"
],
"storage":[
"10.31.90.206"
]
},
"zone": 1
},
"devices": [
"/dev/sdc"
]
}
]
}
]
}
[root@node-01 ~]# heketi-cli topology load --json=top.json
Creating cluster ... ID: e758afb77ee26d5f969d7efee1516e64
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 10.31.90.204 ... ID: a6eedd58c118dcfe44a0db2af1a4f863
Adding device /dev/sdc ... OK
Creating node 10.31.90.205 ... ID: 4066962c14bcdebd28aca193b5690792
Adding device /dev/sdc ... OK
Creating node 10.31.90.206 ... ID: 91e42a2361f0266ae334354e5c34ce11
Adding device /dev/sdc ... OK
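Once the topology is loaded, what Heketi registered can be inspected with standard heketi-cli subcommands (this assumes HEKETI_CLI_SERVER is still exported from the port-forward step):

```shell
# Inspect what Heketi now knows about the cluster.
heketi-cli cluster list     # should list the cluster ID created above
heketi-cli node list        # the three nodes with their zone and cluster
heketi-cli topology info    # full tree: cluster -> nodes -> devices
```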
- 接下來我們将使用Heketi為它配置一個卷來存儲其資料庫:
執行此指令後會生成一個heketi-storage.json的檔案,我們最好是将此檔案裡的
"image": "heketi/heketi:dev"
改為
"image": "heketi/heketi:8"
# heketi-client/bin/heketi-cli setup-openshift-heketi-storage
Then create the related Heketi resources:
# kubectl create -f heketi-storage.json
Pitfall: if heketi-cli reports a "No space" error when running the setup-openshift-heketi-storage subcommand, you may have inadvertently run topology load with mismatched versions of the server and heketi-cli. Stop the running Heketi pod (kubectl scale deployment deploy-heketi --replicas=0), manually remove any signatures from the storage block devices, and then resume the Heketi pod (kubectl scale deployment deploy-heketi --replicas=1). Then reload the topology with a matching version of heketi-cli and retry the step.
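The "manually remove any signatures" step can be done with wipefs. A hedged sketch, assuming the device name /dev/sdc from the topology above; this is destructive, so double-check the device before running it:

```shell
# DESTRUCTIVE: wipe all filesystem/LVM signatures from the raw device.
# Run on each storage node, and only after stopping the Heketi pod.
wipefs --all /dev/sdc

# Confirm the device is raw again; it should report "data".
file -s /dev/sdc
```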
- Wait until the job has completed, then delete the bootstrap Heketi:
# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
- Create the long-term Heketi instance:
{
  "kind": "List",
  "apiVersion": "v1",
  "items": [
    {
      "kind": "Secret",
      "apiVersion": "v1",
      "metadata": {
        "name": "heketi-db-backup",
        "labels": { "glusterfs": "heketi-db", "heketi": "db" }
      },
      "data": { },
      "type": "Opaque"
    },
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "heketi",
        "labels": { "glusterfs": "heketi-service", "deploy-heketi": "support" },
        "annotations": { "description": "Exposes Heketi Service" }
      },
      "spec": {
        "selector": { "name": "heketi" },
        "ports": [ { "name": "heketi", "port": 8080, "targetPort": 8080 } ]
      }
    },
    {
      "kind": "Deployment",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "heketi",
        "labels": { "glusterfs": "heketi-deployment" },
        "annotations": { "description": "Defines how to deploy Heketi" }
      },
      "spec": {
        "replicas": 1,
        "template": {
          "metadata": {
            "name": "heketi",
            "labels": { "name": "heketi", "glusterfs": "heketi-pod" }
          },
          "spec": {
            "serviceAccountName": "heketi-service-account",
            "containers": [
              {
                "image": "heketi/heketi:8",
                "imagePullPolicy": "Always",
                "name": "heketi",
                "env": [
                  { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                  { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                  { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                  { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                  { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                ],
                "ports": [ { "containerPort": 8080 } ],
                "volumeMounts": [
                  { "mountPath": "/backupdb", "name": "heketi-db-secret" },
                  { "name": "db", "mountPath": "/var/lib/heketi" },
                  { "name": "config", "mountPath": "/etc/heketi" }
                ],
                "readinessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 3,
                  "httpGet": { "path": "/hello", "port": 8080 }
                },
                "livenessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 30,
                  "httpGet": { "path": "/hello", "port": 8080 }
                }
              }
            ],
            "volumes": [
              {
                "name": "db",
                "glusterfs": { "endpoints": "heketi-storage-endpoints", "path": "heketidbstorage" }
              },
              { "name": "heketi-db-secret", "secret": { "secretName": "heketi-db-backup" } },
              { "name": "config", "secret": { "secretName": "heketi-config-secret" } }
            ]
          }
        }
      }
    }
  ]
}
# kubectl create -f heketi-deployment.json
service "heketi" created
deployment "heketi" created
- With this done, the Heketi database will persist on a GlusterFS volume and will not be reset every time the Heketi pod restarts.
Use commands such as heketi-cli cluster list and heketi-cli volume list to confirm that the previously created cluster exists and that Heketi is aware of the db storage volume created during the bootstrap phase.
Demo and Testing
- Next, create a storage volume and mount it for testing.
Before testing, we need to expose the heketi service externally through an Ingress, and add a DNS A record resolving heketi.cnlinux.club to 10.31.90.200.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-heketi
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: heketi.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: heketi
          servicePort: 8080
[root@node-01 heketi]# kubectl create -f ingress-heketi.yaml
Then visit http://heketi.cnlinux.club/hello in a browser to verify that the Ingress works.
- Create a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://heketi.cnlinux.club"
restauthenabled: "false"
volumetype: "replicate:2"
[root@node-01 heketi]# kubectl create -f storageclass-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get sc
NAME PROVISIONER AGE
gluster-heketi kubernetes.io/glusterfs 10s
- Create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@node-01 heketi]# kubectl create -f pvc-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pvc-gluster-heketi   Bound    pvc-d978f524-0b74-11e9-875c-005056826470   1Gi        RWO            gluster-heketi   30s
- Mount the PVC in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - name: pod-pvc
    image: busybox:latest
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
[root@node-01 heketi]# kubectl create -f pod-pvc.yaml
Enter the container to check whether the volume was mounted successfully:
[root@node-01 heketi]# kubectl exec pod-pvc -it /bin/sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  47.8G      4.3G     43.5G   9% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
10.31.90.204:vol_675cc9fe0e959157919c886ea7786d33
                       1014.0M     42.7M    971.3M   4% /pv-data
/dev/sda3                47.8G      4.3G     43.5G   9% /dev/termination-log
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/resolv.conf
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/hostname
/dev/sda3                47.8G      4.3G     43.5G   9% /etc/hosts
/ # cd /pv-data/
/pv-data # dd if=/dev/zero of=/pv-data/test.img bs=8M count=300
123+0 records in
122+0 records out
1030225920 bytes (982.5MB) copied, 24.255925 seconds, 40.5MB/s
On a storage node, mount the corresponding brick to confirm the test file was written:
[root@node-04 cfg]# mount /dev/vg_2631413b8b87bbd6cb526568ab697d37/brick_1691ef862dd504e12e8384af76e5a9f2 /mnt
[root@node-04 cfg]# ll -h /mnt/brick/
total 982M
-rw-r--r-- 2 root 2001 982M Jan  2 15:14 test.img