
k8s Notes 6 -- Quickly Deploying a k8s Cluster with kubeadm (v1.19.4)

  • 1 Introduction
  • 2 Building the Cluster
  • 2.1 Install Base Software
  • 2.2 Configure Common Settings
  • 2.3 Start the Cluster
  • 3 Testing
  • 4 Notes

I recently started studying k8s for work, went through several introductory tutorials, and set up clusters quite a few times. I had long wanted to write a simple, beginner-friendly guide, both for my own future reference and as a worked example for anyone who needs one, but for various reasons I never got started. This Saturday night I finally had some spare time: I began building the cluster at 11 pm, then tested, wrote, and fixed the rough spots until it was done (at 4 am). Once again, quite a weight off my shoulders!

1 Introduction

  1. Node overview

    This setup has 4 nodes in total: 1 master + 3 workers.

Role     IP              CPU      Memory
master   192.168.2.131   2 cores  3Gi
node01   192.168.2.132   1 core   2Gi
node02   192.168.2.133   1 core   2Gi
node03   192.168.2.134   1 core   2Gi
  2. Deployment goals
  • Install Docker and kubeadm on all nodes
  • Deploy the Kubernetes master
  • Deploy a container network plugin
  • Deploy the Kubernetes nodes and join them to the cluster
  • Deploy the Dashboard web UI to inspect Kubernetes resources visually
  • Install metrics-server to monitor node and pod CPU and memory usage
  • Install Lens, collect metrics via Prometheus, and display them in Lens
  3. Resources

    This post provides a nearly complete walkthrough of building a k8s cluster with an almost-latest kubeadm, for anyone who wants to learn!

    In addition, the VM system image, container images, and component yaml files used in this build are packaged and uploaded to Baidu Netdisk; if your network is slow, download the package, skip the installation of docker, kubeadm, and the rest, and start the cluster directly from section 2.3.

    For adjusting the VM network in these resources, see my post: Windows小技巧8--VMware workstation虛拟機網絡通信 (the image currently uses NAT mode with a fixed IP).

    Resource link: kubeadm 配套資源 https://pan.baidu.com/s/1_JGnMv83yO6mDXXmO9Y3ng extraction code: a3hd, user: xg, password: 111111

Resource name    Description
k8s-Userver.7z   Ubuntu 16.04 VM image, with kubelet, kubeadm, kubectl, and docker preinstalled
k8s-images.7z    All images needed to start v1.19.4, plus a few test images such as nginx, stress, and busybox
k8s-yaml.7z      yaml files for the flannel network component, dashboard, and metrics-server

2 Building the Cluster

2.1 Install Base Software

The base software consists of docker, kubelet, kubeadm, and kubectl. All commands below are run as root, so sudo is not needed.

  1. Update sources and install base packages
apt-get update && apt-get install -y apt-transport-https curl

It is recommended to switch the system sources to the Tsinghua mirror, in case some downloads are slow;

see: Tsinghua Ubuntu mirror https://mirror.tuna.tsinghua.edu.cn/help/ubuntu/
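For reference, a minimal /etc/apt/sources.list for Ubuntu 16.04 (xenial) pointed at the Tsinghua mirror might look like this sketch (back up the original file first):

cp /etc/apt/sources.list /etc/apt/sources.list.bak
cat > /etc/apt/sources.list <<'EOF'
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
EOF
apt-get update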

  2. Install docker
apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce    # package name assumed; this line was truncated in the original

The docker version should not be too far from the kubeadm version: avoid pairing an early docker with a recent kubeadm, or the newest docker with an early kubeadm. Here the latest docker is paired with kubeadm 1.19.4 (nearly the latest kubeadm) with no conflicts.

For common Docker installation and usage issues, see my post: docker筆記7--Docker常見操作
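The init output later in this post warns that Docker uses the cgroupfs cgroup driver while kubernetes recommends systemd. The cluster works either way here, but if you want to follow the recommendation, a common daemon.json tweak (not part of the original setup) is:

cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker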

  3. Install kubeadm and related packages
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"
apt-get update
apt install -y kubelet=1.19.4-00 kubectl=1.19.4-00 kubeadm=1.19.4-00 --allow-unauthenticated
If you see an error like this:
GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB NO_PUBKEY 8B57C5C2836F4BEB
Fix it with:
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 6A030B21BA07F4FB    # replace with the actual NO_PUBKEY value
  4. Pin package versions
apt-mark hold kubelet kubeadm kubectl docker
  5. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

2.2 Configure Common Settings

  1. Disable swap
Temporarily:
swapoff -a
Permanently (comment out the swap entry):
vim /etc/fstab
# UUID=e9a6ffe0-5f53-4e23-99ab-3fedfb3399c1 none            swap    sw              0       0
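If you prefer not to edit the file by hand, a one-liner does the same thing (a sketch, assuming the swap entry is the only uncommented line containing "swap"; it keeps a .bak backup):

sed -ri.bak 's/^([^#].*\bswap\b.*)$/# \1/' /etc/fstab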
  2. Disable the firewall
ufw disable
  3. Set kernel network parameters
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
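On some systems the two bridge keys above only exist after the br_netfilter module is loaded; if sysctl --system complains about missing keys, load the module first (an extra step, not in the original):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # also load it on boot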
  4. Load images

    To speed up image downloads, I have packaged the images and uploaded them to Baidu Netdisk; download the archive, unpack it, enter the images folder, and batch-load everything with:

for i in $(ls); do docker load -i $i; done
To back up all local images in the same format, generate and run a save script:
docker images | tail -n +2 | awk '{name=$1":"$2; file=$1"-"$2".tar.gz"; gsub("/","-",file); print "docker save -o " file, name}' > save_img.sh && bash save_img.sh
  5. Set hosts
Append to /etc/hosts on every node:
192.168.2.131 kmaster
192.168.2.132 knode01
192.168.2.133 knode02
192.168.2.134 knode03

Note: if you deploy k8s on virtual machines, after completing steps 1-5 above you can simply clone the VM, then change each machine's network configuration and hostname.
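On each clone, the hostname change is a single command, and on Ubuntu 16.04 the static IP lives in /etc/network/interfaces; for example, on the first worker (values assumed from the node table above):

hostnamectl set-hostname knode01
vim /etc/network/interfaces    # change the static address to 192.168.2.132
reboot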

2.3 Start the Cluster

  1. Start kubeadm on the master
kubeadm init \
  --apiserver-advertise-address=192.168.2.131 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
    Normal output looks like this:
I1220 01:16:58.210095    1831 version.go:252] remote version is much newer: v1.20.1; falling back to: stable-1.19
W1220 01:16:58.878208    1831 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.6
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Hostname]: hostname "kmaster" could not be reached
    [WARNING Hostname]: hostname "kmaster": lookup kmaster on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.2.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.2.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.2.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.505441 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xxpowp.b8zoas29foe15zuz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.131:6443 --token xxpowp.b8zoas29foe15zuz \
    --discovery-token-ca-cert-hash sha256:9025b306232c82bb8f5a572d0453247d6db95e5c70dea1e90c63a5e8b8309af5
  2. Deploy the CNI network on the master
kubectl apply -f kube-flannel.yml
Output:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created      
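The flannel pods take a moment to start; once they are Running, the master should flip to Ready. An optional sanity check (not part of the original run):

kubectl get pods -n kube-system    # wait for the coredns and kube-flannel pods to reach Running
kubectl get nodes                  # kmaster should now report STATUS Ready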
  3. Join the worker nodes to the cluster

    Run the join command on each of node01-03:

kubeadm join 192.168.2.131:6443 --token xxpowp.b8zoas29foe15zuz \
    --discovery-token-ca-cert-hash sha256:9025b306232c82bb8f5a572d0453247d6db95e5c70dea1e90c63a5e8b8309af5 
Output:
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join      
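Running exactly that on the master should now list the new workers (an optional check; ROLES stays <none> for workers by default):

kubectl get nodes -o wide    # expect kmaster plus knode01-03, all Ready once flannel is up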
  4. Rejoining the cluster after losing the token and discovery-token-ca-cert-hash

    See the docs: Reference->Setup tools reference->Kubeadm->kubeadm join

4.1 Run on the master
# kubeadm token create
Output:
W0122 22:39:46.556169   22712 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
6jvgb0.7g0mdms1xllhr2pc

# openssl x509 -pubkey \
-in /etc/kubernetes/pki/ca.crt | openssl rsa \
-pubin -outform der 2>/dev/null | openssl dgst \
-sha256 -hex | sed 's/^.* //'
Output:
(stdin)= d07b6263d56d9329fc9a313b0c64ddc83c1eb828eba350ad4b76d9fbd76a1e89

4.2 Run on the worker:
# kubeadm join --token 6jvgb0.7g0mdms1xllhr2pc 10.120.75.102:6443 --discovery-token-ca-cert-hash sha256:d07b6263d56d9329fc9a313b0c64ddc83c1eb828eba350ad4b76d9fbd76a1e89
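A shortcut worth knowing: kubeadm can mint a new token and print the complete join command, hash included, in one step:

kubeadm token create --print-join-command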
  5. Removing a node
5.1 On the master, cordon the node and evict its pods
Mark the node unschedulable:
# kubectl cordon gdcintern-test01-mj.i.nease.net
Evict the pods on the node:
# kubectl drain gdcintern-test01-mj.i.nease.net --ignore-daemonsets
Confirm no pods are still running on the node; if none are, it can be deleted:
# kubectl get pod -o wide|grep gdcintern-test01-mj.i.nease.net
Delete the node:
# kubectl delete node gdcintern-test01-mj.i.nease.net

5.2 Run reset on the worker
kubeadm reset, then delete the stateful directories it lists: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
rm -fr /etc/cni/net.d
Note: by default, if you only run delete node on the master, the worker's kubelet will re-register the node once it restarts the service, so the reset steps above must also be run on the worker.
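kubeadm reset does not clean up iptables rules either; to leave the worker fully clean, the kubeadm documentation suggests flushing them manually (this wipes all iptables rules, so run it with care):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X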


3 Testing

  1. Deploy nginx and expose port 80
kubectl create deployment nginx --image=nginx:1.19.6
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

The exposed port turns out to be 80:30806/TCP, and visiting nodeIp:30806 serves the nginx page normally.
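The same check from the command line (30806 is the NodePort assigned in this run; yours will likely differ):

curl -s http://192.168.2.131:30806 | head -n 4    # any node IP works; expect the nginx welcome page HTML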

  2. Install the dashboard
wget    # (URL truncated in the source; this step downloads the dashboard deployment yaml)

Note: in the kubernetes-dashboard Service, add nodePort: 30001 under the ports attribute (any other free port works too) and set type: NodePort, as shown below.
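For reference, a sketch of what the edited Service section ends up looking like; all fields except the two marked "added" come from the stock dashboard manifest, so adjust if your version differs:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # added
  selector:
    k8s-app: kubernetes-dashboard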


Create an admin service account for the dashboard and fetch its login token:

1) root@kmaster:~# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
2) root@kmaster:~# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
3) root@kmaster:~# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-6hl77
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 3e37a5cd-8f54-4cde-8d6d-cefbc7d92516

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:         eyJhbGciOiJSUzI1NiIsImtpZCI6IktpaWNYeW5DSVRLdWx6YmpoUFJsdHVXTzRQV0NGQnlKV1dmN29Xd21zX1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNmhsNzciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2UzN2E1Y2QtOGY1NC00Y2RlLThkNmQtY2VmYmM3ZDkyNTE2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.fTGH-2oHqGcc4yOcjEUgco4aDPF5OyojQWzVt2AnvQLiOWynFtaxIjWMXNqMcfH4fpTE7sT1PrDECFG2iV4J6ZIhtQUMfDD5YqjPSLU_w1qr528HcDFRtNbS6ik-OA-KjmfbNU6bdQ4QEYNPsXC40TBj1kpr9nr-ZZQIuQhD7zXQ5AEQR3S6A9B0TPwl8v1wRn86ge7YD2YZ76JY-knntlnd5wgsbfYpAeQECxZ6uOcN-mJYOWB11WtGmfVCtWC4-N63SlWyvcXEfzl8h5wnxI8yTGdH-LoEjHMx-B9-_yS0yRfZLPDowND9BgoqQvJF7lqyC1PR7M25Z20s2h7Log      

View the dashboard at one of the URLs below. Because the https certificate is untrusted, most browsers refuse the connection outright, so we also generate a p12 client certificate, import it into the browser, and then log in with the token generated above:

https://192.168.2.131:30001 (https://nodeIp:NodePort)

https://192.168.2.131:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default (via the api server proxy on port 6443)

1) Generate kubecfg.crt
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
2) Generate kubecfg.key
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
3) Generate kubecfg.p12 from kubecfg.key and kubecfg.crt
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Import the certificate in the browser: Chrome -> Settings -> Privacy and security -> Manage certificates -> Your certificates -> Import, select the kubecfg.p12 file just generated, restart the browser, and open the dashboard URL.

Importing the p12 certificate file:


With the certificate trusted, you can normally click Proceed to 192.168.2.131 (unsafe) to reach the dashboard (the newest macOS handles this differently, so this applies to Chrome on Linux and Windows):


Viewing node information:


At this point the dashboard is finally reachable from the browser. Alternatively, with some extra configuration the dashboard can be exposed on a plain http port, which avoids this whole detour; once I have tested that http-port setup, I will add the steps here as well.

  3. Install metrics-server
wget    # (URL truncated in the source; this step downloads the metrics-server components yaml)

After applying it, the pods should all come up normally:
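If the metrics-server pods instead crash-loop or stay unready, one caveat from practice (not from the original post): on kubeadm clusters the kubelets serve self-signed certificates, which metrics-server rejects by default. The usual workaround is adding these container args in its Deployment yaml:

      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls                          # skip kubelet serving-cert verification (fine for a lab)
        - --kubelet-preferred-address-types=InternalIP    # reach kubelets by node IP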


Once metrics-server is working, kubectl top nodes shows node CPU and memory usage (if it is not installed correctly, the command errors out):
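For completeness, the commands themselves (the pod-level variant works the same way):

kubectl top nodes       # per-node CPU and memory usage
kubectl top pods -A     # per-pod usage across all namespaces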

  4. Install Prometheus monitoring via the Lens plugin

    Copy the kubeconfig file to your workstation, import it into Lens, and configure the Prometheus settings; for details see my post: k8s筆記3–Kubernetes IDE Lens. Master node monitoring information:

    CPU / memory / disk information for each node:

4 Notes

  1. References

    1 production-environment/tools/kubeadm/install-kubeadm
    2 使用kubeadm快速部署一個Kubernetes叢集(v1.18)
    3 kubeadm 配套資源 https://pan.baidu.com/s/1_JGnMv83yO6mDXXmO9Y3ng extraction code: a3hd, user: xg, password: 111111

  2. Software notes

    docker version: Docker version 20.10.1;

    k8s cluster version: v1.19.4;

    Test system: Ubuntu 16.04 server;

    Tested with VMware Workstation 16.0.0 Pro; a much newer VMware version may not be supported (the base Userver.vmdk image was generated by the author in 2018 with VMware 12.5).