
An Introduction to PV, PVC, and ConfigMap in Kubernetes

Author: 技术怪圈

1. PV and PVC: Introduction

  • PV: A PersistentVolume is a piece of network storage provisioned in the cluster by a Kubernetes administrator. It is a cluster-level resource, i.e. it does not belong to any namespace, and its data is ultimately stored on the backing hardware. A pod cannot mount a PV directly: the PV must first be bound to a PVC, and the pod then mounts the PVC. PVs support NFS, Ceph, commercial storage arrays, cloud-provider-specific storage, and more, and let you define whether the volume is block or file storage, its capacity, its access modes, and so on. A PV's lifecycle is independent of any Pod, so deleting a Pod that uses a PV need not affect the data stored in the PV.
  • PVC: A PersistentVolumeClaim decouples pods from storage, so that the storage can be changed without modifying the pod. Compared with mounting NFS directly, the PV/PVC layer makes it possible to manage space allocation and access permissions on the storage server. Kubernetes has supported PersistentVolume and PersistentVolumeClaim since version 1.0. A PVC is a pod's request for storage: the pod mounts the PVC and writes its data through it, and the PVC must be bound to a PV before it can be used. A PVC is created in a specific namespace, so the pod and the PVC must run in the same namespace. A PVC can request a specific capacity and access modes, and deleting a pod that uses a PVC need not affect the data in the PVC.

Summary:

A PV is an abstraction over the underlying network storage: it defines the storage as a resource, so a single pool of storage can be carved into pieces for different workloads.

A PVC is a claim on PV resources: the pod writes data through the PVC to the PV, and the PV in turn persists it to the actual hardware storage.


PersistentVolume parameters:

For the access modes supported by PVs on each storage backend, see the access-mode table in the official Kubernetes documentation.
# kubectl explain PersistentVolume
capacity: # capacity of the PV; kubectl explain PersistentVolume.spec.capacity

accessModes: # access modes; kubectl explain PersistentVolume.spec.accessModes
ReadWriteOnce – the volume can be mounted read-write by a single node (RWO)
ReadOnlyMany – the volume can be mounted read-only by many nodes (ROX)
ReadWriteMany – the volume can be mounted read-write by many nodes (RWX)

persistentVolumeReclaimPolicy # reclaim policy: what happens to a provisioned volume when it is released:

# kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
Retain – the volume and its data are kept after release; an administrator must delete them manually
Recycle – the volume is scrubbed, i.e. all data on it (including directories and hidden files) is deleted; only NFS and hostPath support this, and it is deprecated
Delete – the volume is deleted automatically

volumeMode # volume mode; kubectl explain PersistentVolume.spec.volumeMode
Defines whether the volume is consumed as a raw block device or as a filesystem; the default is Filesystem.

mountOptions # list of additional mount options for finer-grained control, e.g. ro for read-only
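Putting these parameters together, a single PV manifest might look like the sketch below; the PV name is hypothetical, and the NFS export is the one used later in this article:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv                  # hypothetical name
spec:
  capacity:
    storage: 5Gi                        # capacity of the PV
  accessModes:
    - ReadWriteMany                     # RWX: many nodes, read-write
  persistentVolumeReclaimPolicy: Retain # keep the data after the claim is released
  volumeMode: Filesystem                # the default; Block would expose a raw device
  mountOptions:
    - ro                                # ask nodes to mount it read-only
  nfs:
    server: 172.31.7.252
    path: /data/k8sdata/linux66
```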

PersistentVolumeClaim parameters:

# kubectl explain PersistentVolumeClaim

accessModes: # PVC access modes; kubectl explain PersistentVolumeClaim.spec.accessModes
ReadWriteOnce – the claim can be mounted read-write by a single node (RWO)
ReadOnlyMany – the claim can be mounted read-only by many nodes (ROX)
ReadWriteMany – the claim can be mounted read-write by many nodes (RWX)

resources: # requested size of the storage backing the PVC

selector: # label selector used to pick the PV to bind
matchLabels # match on label key/value pairs
matchExpressions # set-based matching (In, NotIn, Exists, DoesNotExist), not regular expressions

volumeName # name of the PV to bind

volumeMode # volume mode
Defines whether the claim consumes a raw block device or a filesystem; the default is Filesystem.
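A PVC using these fields might look like the following sketch; the claim name and the selector label are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # hypothetical name
  namespace: myserver      # a PVC is namespaced
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi         # requested size
  selector:                # bind only to PVs carrying this label
    matchLabels:
      storage-tier: nfs    # hypothetical label
```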

Volume types:

static: static volumes. A PV is created manually up front, then a PVC is created and bound to it, and finally a pod mounts the PVC. Suitable for scenarios where the PVs and PVCs are relatively fixed.

dynamic: dynamic volumes. A StorageClass is created first; when a pod later uses a PVC that references the StorageClass, a PV is provisioned dynamically to satisfy the claim. Suitable for stateful clusters such as a MySQL primary with replicas or a ZooKeeper ensemble.


Static volume example:

1. Prepare the NFS export

root@harbor:/data/k8sdata# showmount -e
Export list for harbor.host.com:
/data/k8sdata/linux66 *
           

2. Create the PV and PVC

#PV
root@k8s-master:~/yaml/1221/case8-pv-static# cat 1-myapp-persistentvolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/linux66
    server: 172.31.7.252
root@k8s-master:~/yaml/1221/case8-pv-static# kubectl apply -f 1-myapp-persistentvolume.yaml
persistentvolume/myserver-myapp-static-pv created

#PVC
root@k8s-master:~/yaml/1221/case8-pv-static# cat 2-myapp-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver
spec:
  volumeName: myserver-myapp-static-pv  # name of the PV to bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           

3. Create the myserver namespace and apply the PVC

root@k8s-master:~/yaml/1221/case8-pv-static# kubectl create ns myserver
namespace/myserver created
root@k8s-master:~/yaml/1221/case8-pv-static# kubectl apply -f 2-myapp-persistentvolumeclaim.yaml
persistentvolumeclaim/myserver-myapp-static-pvc created           

4. Inspect the PV and PVC

root@k8s-master:~/yaml/1221/case8-pv-static# kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWO            Retain           Bound    myserver/myserver-myapp-static-pvc                           2m32s
root@k8s-master:~/yaml/1221/case8-pv-static# kubectl get pvc -n myserver
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWO                           93s
           

5. Deploy the service in Kubernetes

root@k8s-master:~/yaml/1221/case8-pv-static# cat 3-myapp-webserver.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-static-pvc  # name of the PVC to mount

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend           

6. Generate data
root@k8s-master:~/yaml/1221/case8-pv-static# kubectl get pods -n myserver
NAME                                             READY   STATUS    RESTARTS   AGE
myserver-myapp-deployment-name-fb44b4447-n9r4r   1/1     Running   0          2m14s
myserver-myapp-deployment-name-fb44b4447-nsljw   1/1     Running   0          2m14s
myserver-myapp-deployment-name-fb44b4447-rsq5s   1/1     Running   0          2m14s

root@k8s-master:~/yaml/1221/case8-pv-static# kubectl exec -it -n myserver myserver-myapp-deployment-name-fb44b4447-n9r4r -- bash
root@myserver-myapp-deployment-name-fb44b4447-n9r4r:/# apt update -y && apt install wget -y

root@myserver-myapp-deployment-name-fb44b4447-n9r4r:/# wget https://tenfei04.cfp.cn/creative/vcg/800/new/VCG21gic15081740-NEQ.jpg
           

7. Verify the data on the NFS server:

root@harbor:/data/k8sdata/linux66# ls
1.jpg           

8. Verify in a browser

(Screenshot: the downloaded image served in a browser via the NodePort service.)

Dynamic volume example:

1. Create the namespace, service account, and RBAC:

root@k8s-master:~/yaml/1221/case9-pv-dynamic-nfs# cat 1-rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io           

2. Create the StorageClass

root@k8s-master:~/yaml/1221/case9-pv-dynamic-nfs# cat 2-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
reclaimPolicy: Retain # reclaim policy for provisioned PVs; the default is Delete, which removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  - noresvport # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime # do not update inode access timestamps; improves performance under high concurrency
parameters:
  mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  # archive (keep) the data when the PVC is deleted; "false" discards it
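Optionally, a StorageClass can be marked as the cluster default, so PVCs that omit storageClassName still get dynamically provisioned storage. This is a hedged sketch (the name is hypothetical; the annotation is the standard Kubernetes one):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage-default   # hypothetical name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # make this the default class
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain
```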

3. Deploy the NFS provisioner

root@k8s-master:~/yaml/1221/case9-pv-dynamic-nfs# cat 3-nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: harbor.host.com/k8s/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.7.252
            - name: NFS_PATH
              value: /data/k8sdata/linux66
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.7.252
            path: /data/k8sdata/linux66           

4. Create the PVC:

root@k8s-master:~/yaml/1221/case9-pv-dynamic-nfs# cat 4-create-pvc.yaml
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: nfs-storage # name of the StorageClass to request storage from
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size

5. Deploy the web service:

root@k8s-master:~/yaml/1221/case9-pv-dynamic-nfs# cat 5-myapp-webserver.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend

root@myserver-myapp-deployment-name-7c855dc86d-2cg54:/# cd /usr/share/nginx/html/statics
root@myserver-myapp-deployment-name-7c855dc86d-2cg54:/usr/share/nginx/html/statics# echo statics_web_page > index.html

# verify inside the pod

root@myserver-myapp-deployment-name-7c855dc86d-2cg54:/usr/share/nginx/html/statics# df -Th
Filesystem                                                                                                      Type     Size  Used Avail Use% Mounted on
overlay                                                                                                         overlay   24G   13G  9.6G  58% /
tmpfs                                                                                                           tmpfs     64M     0   64M   0% /dev
tmpfs                                                                                                           tmpfs    945M     0  945M   0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-ubuntu--lv                                                                               ext4      24G   13G  9.6G  58% /etc/hosts
shm                                                                                                             tmpfs     64M     0   64M   0% /dev/shm
172.31.7.252:/data/k8sdata/linux66/myserver-myserver-myapp-dynamic-pvc-pvc-aeac6023-c2b7-444f-bc23-e2ab34e3bf82 nfs4      24G   13G  9.8G  57% /usr/share/nginx/html/statics
tmpfs                                                                                                           tmpfs    1.6G   12K  1.6G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                                           tmpfs    945M     0  945M   0% /proc/acpi
tmpfs                                                                                                           tmpfs    945M     0  945M   0% /proc/scsi
tmpfs                                                                                                           tmpfs    945M     0  945M   0% /sys/firmware

# verify on the NFS server
root@harbor:~# cat /data/k8sdata/linux66/myserver-myserver-myapp-dynamic-pvc-pvc-aeac6023-c2b7-444f-bc23-e2ab34e3bf82/index.html
statics_web_page           

2. ConfigMap

A ConfigMap decouples configuration from the container image: configuration data is stored in a ConfigMap object and then mounted into the pod as a volume (or injected as environment variables), which makes the configuration available inside the pod.

Use cases:

  • Define global environment variables for a pod via a ConfigMap.
  • Pass command-line arguments to a pod via a ConfigMap, e.g. the username and password for mysql -u -p.
  • Provide configuration files to the services in a pod's containers via a ConfigMap, with the files mounted into the container.
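As a sketch of the first two use cases (all names here are hypothetical), a ConfigMap's keys can be injected as environment variables with envFrom and then referenced in the container's command:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env            # hypothetical name
data:
  DB_USER: appuser
  DB_HOST: mysql.default.svc
---
apiVersion: v1
kind: Pod
metadata:
  name: env-demo           # hypothetical name
spec:
  containers:
  - name: demo
    image: busybox:1.36
    envFrom:
    - configMapRef:
        name: app-env      # every key becomes an environment variable
    command: ["sh", "-c", "echo connecting as $DB_USER to $DB_HOST"]
```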

Notes:

  • A ConfigMap must be created before any pod uses it.
  • A pod can only use ConfigMaps in the same namespace, i.e. a ConfigMap cannot be used across namespaces.
  • ConfigMaps are for non-sensitive, unencrypted configuration; use a Secret for sensitive data.
  • A ConfigMap typically holds less than 1 MiB of configuration data.
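One additional safeguard worth knowing (assuming Kubernetes v1.21 or later, where the feature is GA): a ConfigMap can be marked immutable, which prevents accidental edits and reduces apiserver watch load. A minimal sketch with a hypothetical name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1   # hypothetical; version the name, since it cannot be edited in place
data:
  LOG_LEVEL: info
immutable: true         # updates require creating a new ConfigMap and re-referencing it
```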

ConfigMap example:

root@k8s-master:~/yaml/1221/case10-configmap# cat 1-deploy_configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default: |
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html index.php index.htm;

       location / {
           root /data/nginx/html;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath:  /etc/nginx/conf.d/mysite
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default
               path: mysite.conf

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80


# verify inside the pod
root@nginx-deployment-779cb4cbd8-86jxf:/# cat /etc/nginx/conf.d/mysite/mysite.conf
server {
   listen       80;
   server_name  www.mysite.com;
   index        index.html index.php index.htm;

   location / {
       root /data/nginx/html;
       if (!-e $request_filename) {
           rewrite ^/(.*) /index.html last;
       }
   }
}           
