
Notes on using Ceph with Kubernetes: installing Ceph, and using Ceph from k8s

Final environment and what you will learn:

Use kubeadm with Alibaba Cloud mirror repositories to build a single-node k8s environment.

Use ceph-deploy to build a Ceph cluster with 1 mon, 1 mgr, and 3 osds.

Use the Ceph cluster's RBD and CephFS as storage backends in k8s.

Use Ceph object storage as the backend for Docker Harbor to store images.

Machine plan for the learning environment:

Note: your laptop should have at least 8 GB of RAM; machines with less than 8 GB will struggle to run this.

node1 (192.168.8.138): single-node k8s; Ceph roles: admin, node1, ceph-client
CentOS 7.x, 2 CPUs / 6 GB RAM, one extra disk attached (at least 20 GB)
k8s: v1.18.6
ceph: 12.2.13 luminous (stable)

node2 (192.168.8.139)
CentOS 7.x, 2 CPUs / 700 MB RAM, one extra disk attached (at least 20 GB)
ceph: 12.2.13 luminous (stable)

node3 (192.168.8.140)
CentOS 7.x, 2 CPUs / 700 MB RAM, one extra disk attached (at least 20 GB)
ceph: 12.2.13 luminous (stable)


Installing single-node k8s with kubeadm (run on machine 138)

Set the timezone and hostname, sync time, and disable the firewall, swap, and SELinux.

timedatectl set-timezone 'Asia/Shanghai'
ntpdate ntp1.aliyun.com
# put your own host IP here
hostnamectl set-hostname node1 && echo "192.168.8.138 node1" >> /etc/hosts
systemctl stop firewalld.service && systemctl disable firewalld.service
setenforce 0 && swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
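
A quick sanity check that these changes took effect (a sketch; exact output varies by host):

# swap should report 0 total after swapoff
free -m | grep -i swap
# SELinux should now report Permissive (or Disabled)
getenforce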
           

Add yum repositories

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
           

Remove old Docker packages and install Docker dependencies

yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine -y
yum install yum-utils device-mapper-persistent-data lvm2 -y
           

Install Docker

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

## Install Docker CE. This step can take a while; if it fails, just re-run it.
yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11


# Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://e6vlzg9v.mirror.aliyuncs.com"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
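
To confirm Docker is running and actually picked up the systemd cgroup driver from daemon.json, a quick check (output wording may differ slightly between Docker versions):

docker info | grep -i "cgroup driver"
# expected: Cgroup Driver: systemd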
           

Install kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

echo 1 > /proc/sys/net/ipv4/ip_forward

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
yum install -y kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
systemctl enable kubelet && systemctl start kubelet
           

Initialize Kubernetes

# change advertise-address to your own host IP
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.8.138 --kubernetes-version 1.18.6 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all -v 5


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
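
At this point the node usually shows NotReady because no network plugin is installed yet; that is expected and is fixed in the next step. A quick check, assuming the hostname node1 set earlier:

kubectl get nodes
# node1 will stay NotReady until Calico is applied below
kubectl get pods -n kube-system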
           

Install the network add-on (Calico)

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
           

Test that k8s works

kubectl taint node node1 node-role.kubernetes.io/master-
kubectl run test --image=nginx -l test=test
kubectl expose pod test --port=80 --target-port=80 --type=NodePort
kubectl get service
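
To confirm the nginx pod is actually reachable, hit the NodePort the service was given. This is a sketch; the port is assigned randomly, so it is read back with jsonpath here:

# look up the assigned NodePort and curl it on the host IP
NODE_PORT=$(kubectl get service test -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://192.168.8.138:${NODE_PORT} | grep -i "welcome to nginx"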
           

Installing Ceph

Set hostnames and the timezone, sync time, and disable the firewall (run on all machines)

#138
hostnamectl set-hostname node1
#139
hostnamectl set-hostname node2
#140
hostnamectl set-hostname node3
# the rest applies to all machines
# append the following to /etc/hosts
192.168.8.138 admin node1 ceph-client
192.168.8.139 node2
192.168.8.140 node3
# all
timedatectl set-timezone 'Asia/Shanghai'
ntpdate ntp1.aliyun.com
systemctl stop firewalld.service && systemctl disable firewalld.service
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
           

Create a service user, grant it sudo, and configure yum repositories

# all machines
useradd cephu
passwd cephu
# all machines
visudo   # add the following line below "root ALL=(ALL) ALL":
cephu ALL=(root) NOPASSWD:ALL
# all machines
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo



# on 138 (the admin machine)
# vim /etc/profile
export CEPH_DEPLOY_REPO_URL=http://mirrors.aliyun.com/ceph/rpm-luminous/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.aliyun.com/ceph/keys/release.asc
# vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
           

On the admin machine, give the service user passwordless SSH to the other machines and install ceph-deploy

# on 138 (the admin machine)
yum install ceph-deploy -y
# on 138 (the admin machine)
su - cephu
ssh-keygen
ssh-copy-id cephu@node1
ssh-copy-id cephu@node2
ssh-copy-id cephu@node3
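
A quick check that passwordless SSH works from the admin machine as the cephu user (a sketch, assuming the hostnames resolve via the /etc/hosts entries added earlier):

# each command should print the remote hostname without prompting for a password
for n in node1 node2 node3; do ssh cephu@$n hostname; done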
           

Initialize the Ceph cluster. All ceph-deploy operations below are run on the admin node, inside the my-cluster directory.

# on 138: all ceph-deploy commands below are run on the admin node inside the my-cluster directory
su - cephu
mkdir my-cluster
cd my-cluster/
# edit the SSH client config used by ceph-deploy
vim ~/.ssh/config
# add the following
Host node1
Hostname node1
User cephu

Host node2
Hostname node2
User cephu

Host node3
Hostname node3
User cephu


# 
chmod 644 ~/.ssh/config
wget https://files.pythonhosted.org/packages/5f/ad/1fde06877a8d7d5c9b60eff7de2d452f639916ae1d48f0b8f97bf97e570a/distribute-0.7.3.zip
unzip distribute-0.7.3.zip
cd distribute-0.7.3/
sudo python setup.py install
cd ~/my-cluster
ceph-deploy new node1
ceph-deploy install --release luminous node1 node2 node3
ceph-deploy mon create-initial
ceph-deploy admin node1 node2 node3
ceph-deploy mgr create node1
ceph-deploy osd create --data /dev/sdb node1
ceph-deploy osd create --data /dev/sdb node2
ceph-deploy osd create --data /dev/sdb node3
# if you hit "error: GPT headers found, they must be removed on: /dev/sdb", fix it with "sgdisk --zap-all /dev/sdb"
sudo ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
sudo ceph-mgr -i node1
sudo ceph status
sudo ceph mgr module enable dashboard
sudo ceph config-key set mgr/dashboard/node1/server_addr 192.168.8.138
sudo netstat -nltp | grep 7000
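
Before moving on, it is worth confirming the cluster is healthy and all three OSDs are up (a sketch of the usual checks; exact output depends on your environment):

# overall health should be HEALTH_OK (or HEALTH_WARN until all OSDs report in)
sudo ceph -s
# should list three OSDs, one per node, all up and in
sudo ceph osd tree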


           

Verify the Ceph dashboard by opening the web UI (port 7000 on node1) in a browser.


Using Ceph from k8s

Reference: https://blog.51cto.com/leejia/2501080?hmsr=joyk.com&utm_source=joyk.com&utm_medium=referral

Static persistent volumes

With static provisioning, a storage administrator has to manually create the corresponding image on the Ceph cluster every time storage is needed, before k8s can use it.

Create the Ceph secret

k8s needs a secret with Ceph credentials, mainly so that it can map RBD images.

1. On the Ceph master node, run the following to get the admin key, base64-encoded (in production you would create a dedicated user for k8s):

# ceph auth get-key client.admin | base64
QVFCd3BOQmVNMCs5RXhBQWx3aVc3blpXTmh2ZjBFMUtQSHUxbWc9PQ==
           

2. In k8s, create the secret from a manifest

# vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCd3BOQmVNMCs5RXhBQWx3aVc3blpXTmh2ZjBFMUtQSHUxbWc9PQ==

# kubectl apply -f ceph-secret.yaml 
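
To make sure the key landed in the secret correctly, you can decode it back and compare it with the output of ceph auth get-key (a sketch, assuming the secret name used above):

# should print the raw admin key, identical to "ceph auth get-key client.admin"
kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 -d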
           

Create a storage pool

# list existing pools
ceph osd lspools
# create a pool
ceph osd pool create mypool 128 128
# initialize it for rbd
rbd pool init mypool
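
A note on the pg_num of 128: it roughly follows the common rule of thumb of (number of OSDs × 100) / replica size, rounded up to a power of two. With 3 OSDs and the default replica size of 3, (3 × 100) / 3 = 100, which rounds up to 128.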
           

Create an image

The commands below use the mypool pool created above. Create the image on a client with Ceph installed, or directly on the Ceph master node:

# rbd create  mypool/image1 --size 1024 --image-feature layering
# rbd info mypool/image1
rbd image 'image1':
	size 1GiB in 256 objects
	order 22 (4MiB objects)
	block_name_prefix: rbd_data.12c56b8b4567
	format: 2
	features: layering
	flags: 
	create_timestamp: Tue Sep  1 14:53:13 2020
           

Create a PV

# vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  rbd:
    monitors:
      - 192.168.8.138:6789
      - 192.168.8.139:6789
      - 192.168.8.140:6789
    pool: mypool
    image: image1
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
  persistentVolumeReclaimPolicy: Retain

# kubectl apply -f pv.yaml
persistentvolume/ceph-pv created

# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ceph-pv   1Gi        RWO,ROX        Retain           Available                                   76s
           

Create a PVC

# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi

# kubectl apply -f pvc.yaml
           

Once the claim is created, k8s binds it to the best-matching PV: the PV's capacity must satisfy the claim's request, and its access modes must include those specified by the claim. The PVC above therefore binds to the PV we just created.

Check the PVC binding:

# kubectl get pvc
NAME         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-claim   Bound    ceph-pv   1Gi        RWO,ROX                       13m
           
# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod
spec:
  volumes:
  - name: ceph-volume
    persistentVolumeClaim:
      claimName: ceph-claim
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["/bin/sleep", "60000000000"]
    volumeMounts:
    - name: ceph-volume
      mountPath: /usr/share/busybox

# kubectl apply -f pod.yaml
# write a file into the volume
# kubectl exec -it ceph-pod -- "/bin/touch" "/usr/share/busybox/test.txt"
           

Verify

# map and mount the image with the kernel RBD client
# rbd map mypool/image1
/dev/rbd1
# mkdir /mnt/ceph-rbd && mount /dev/rbd1 /mnt/ceph-rbd/ && ls /mnt/ceph-rbd/
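
When you are done checking, unmount and unmap so the image is not left mapped on the host (a sketch; the rbd device number may differ on your machine):

# clean up the test mount; adjust /dev/rbd1 if rbd map returned a different device
umount /mnt/ceph-rbd && rbd unmap /dev/rbd1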
           

Dynamic persistent volumes

When k8s dynamically requests Ceph storage through a storageclass, the controller-manager needs the rbd command to talk to the Ceph cluster, but the default k8s.gcr.io/kube-controller-manager image does not include Ceph's rbd client. The k8s project recommends using an external provisioner for this, a standalone program that follows the specification defined by k8s.

Following that recommendation, we use the external rbd-provisioner. Run the following on the k8s master:

# git clone https://github.com/kubernetes-incubator/external-storage.git
# cd external-storage/ceph/rbd/deploy
# sed -r -i "s/namespace: [^ ]+/namespace: kube-system/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
# kubectl -n kube-system apply -f ./rbac

# kubectl get pods  -n kube-system  -l app=rbd-provisioner 
NAME                              READY   STATUS    RESTARTS   AGE
rbd-provisioner-c968dcb4b-fklhw   1/1     Running   0          7m16s
           

Create a regular user for k8s to map RBD images with

Create a dedicated pool and user for k8s in the Ceph cluster

# ceph osd pool create kube 60
# rbd  pool init kube
# ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
           

Create the Ceph secrets in the k8s cluster:

# the admin account's secret must be in the kube-system namespace
# ceph auth get-key client.admin | base64
QVFDYjZFMWZkUEpNQkJBQWsxMmI1bE5keFV5M0NsVjFtSitYeEE9PQ==
# vim ceph-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFDYjZFMWZkUEpNQkJBQWsxMmI1bE5keFV5M0NsVjFtSitYeEE9PQ==

# kubectl apply -f ceph-secret.yaml 
secret/ceph-secret created
# kubectl get secret ceph-secret -n kube-system
NAME          TYPE     DATA   AGE
ceph-secret   Opaque   1      33s

# the regular user's secret lives in the namespace where it will be used (default here)
# ceph auth get-key client.kube | base64
QVFCKytVMWZQT040SVJBQUFXS0JCNXZuTk94VFp1eU5UVVV0cHc9PQ==

# vim ceph-kube-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
data:
  key: QVFCKytVMWZQT040SVJBQUFXS0JCNXZuTk94VFp1eU5UVVV0cHc9PQ==
type:
  kubernetes.io/rbd

# kubectl apply -f ceph-kube-secret.yaml

# kubectl get secret ceph-kube-secret
NAME               TYPE                DATA   AGE
ceph-kube-secret   kubernetes.io/rbd   1      2m27s
           

Create a storageclass whose provisioner is the newly added external provisioner:

# write the storageClass manifest
# vim sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
  annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.8.138:6789,192.168.8.139:6789,192.168.8.140:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"


# kubectl apply -f sc.yaml 
storageclass.storage.k8s.io/ceph-rbd created
# kubectl get storageclass
NAME                 PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd (default)   ceph.com/rbd   Delete          Immediate           false                  10s
           

Create a PVC

Since we marked the storageclass as the default, the PVC can be created directly and the provisioner will create a backing RBD image for it:

# create the pvc
# vim pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-sc
spec:
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi


# before creating the pvc
kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
ceph-pv   1Gi        RWO,ROX        Retain           Bound    default/ceph-claim                           77m
# rbd ls kube
# kubectl apply -f pvc.yaml 
persistentvolumeclaim/ceph-claim-sc created
# after creating the pvc, a pv and a backing rbd image are generated automatically
# rbd ls kube
kubernetes-dynamic-pvc-10cbb787-ec2b-11ea-a4fe-7e5ddf4fa4e9
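
The verification step below uses a pod named ceph-pod-sc that mounts the new claim. Its manifest is not shown in the original notes, so here is a minimal sketch following the same busybox pattern as the static example (names are assumptions except ceph-claim-sc and ceph-pod-sc):

# write and apply a minimal pod that mounts ceph-claim-sc
cat > pod-sc.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod-sc
spec:
  volumes:
  - name: ceph-volume-sc
    persistentVolumeClaim:
      claimName: ceph-claim-sc
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["/bin/sleep", "60000000000"]
    volumeMounts:
    - name: ceph-volume-sc
      mountPath: /usr/share/busybox
EOF
kubectl apply -f pod-sc.yaml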
           

Verify:

# kubectl exec -it ceph-pod-sc -- "/bin/touch" "/usr/share/busybox/test-sc.txt"
# rbd map kube/kubernetes-dynamic-pvc-10cbb787-ec2b-11ea-a4fe-7e5ddf4fa4e9
/dev/rbd3
# mkdir /mnt/ceph-rbd-sc  && mount /dev/rbd3 /mnt/ceph-rbd-sc
# ls /mnt/ceph-rbd-sc/
lost+found  test-sc.txt
           

Static use of CephFS:

Enable the CephFS service (MDS):

# su - cephu
# cd /home/cephu/my-cluster
# ceph-deploy mds create node1
# sudo netstat -nltp | grep mds
tcp        0      0 0.0.0.0:6805            0.0.0.0:*               LISTEN      306115/ceph-mds
           

Create the CephFS pools:

# create the pools
sudo ceph osd pool create fs_data 32
sudo ceph osd pool create fs_metadata 24
# create the cephfs filesystem
sudo ceph fs new cephfs fs_metadata fs_data
# check cephfs status
sudo ceph fs ls
sudo ceph mds stat
           

Extension: you can also create additional users and separate CephFS spaces.

Try mounting locally

# install the client package
yum -y install ceph-common
# get the admin key
sudo ceph auth get-key client.admin
# mount
sudo mkdir  /mnt/cephfs
sudo mount -t ceph 192.168.8.138:6789:/ /mnt/cephfs -o name=admin,secret=AQCb6E1fdPJMBBAAk12b5lNdxUy3ClV1mJ+XxA==
           

Mount as a volume in k8s

Use the admin secret for authentication:
# ceph auth get-key client.admin | base64 
QVFDYjZFMWZkUEpNQkJBQWsxMmI1bE5keFV5M0NsVjFtSitYeEE9PQ==

# ceph-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDYjZFMWZkUEpNQkJBQWsxMmI1bE5keFV5M0NsVjFtSitYeEE9PQ==

# vim pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: cephfs
spec:
  volumes:
  - name: cephfs
    cephfs:
      monitors:
      - 192.168.8.138:6789
      path: /
      secretRef:
        name: ceph-secret
      user: admin
  containers:
  - name: cephfs
    image: nginx
    volumeMounts:
    - name: cephfs
      mountPath: /cephfs
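
To apply the manifests and check the mount (a sketch; the pod name and mount path follow the manifest above):

kubectl apply -f ceph-secret.yaml
kubectl apply -f pod.yaml
# the cephfs filesystem should appear mounted at /cephfs inside the container
kubectl exec -it cephfs -- df -h /cephfs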
           
