
Deploying k8s

(一) Installing with kubeadm:

(1) Overview of deployment methods:

① yum install:

1、Pros: installation and configuration are simple, making it a good fit for beginners.

2、Cons: the packaged version is very old (currently only K8s 1.5.2), and many features are unsupported.

② kind install:

1、kind lets you run Kubernetes on your local computer; it requires a working Docker installation.

2、Recommended reading: https://kind.sigs.k8s.io/docs/user/quick-start/

③ minikube deployment:

1、minikube is a tool that lets you run Kubernetes locally.

2、minikube runs a single-node Kubernetes cluster on your personal computer (Windows, macOS, or Linux) so you can try Kubernetes out or use it for day-to-day development, which makes it a good fit for developers who want to get a feel for K8s.

3、Recommended reading: https://minikube.sigs.k8s.io/docs/start/

④ kubeadm:

1、You can use the kubeadm tool to create and manage Kubernetes clusters; it is suitable for production deployments.

2、The tool performs the necessary actions to bring up a usable, secure cluster in a user-friendly way.

3、Recommended reading:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

4、kubectl lets you run commands against a Kubernetes cluster. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

5、Recommended reading: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

6、For large-scale deployments, read:

https://kubernetes.io/zh/docs/setup/best-practices/cluster-large/

⑤ Binary deployment:

1、The installation steps are fairly tedious, but you get a much better understanding of the details; a good fit for operations engineers in production environments.

⑥ Building from source:

1、The hardest option; be mentally prepared for all kinds of troubleshooting. That said, it should not be very difficult for people doing secondary development on K8s.

(2) Installing and deploying with kubeadm:

① Preparation: {run on every node}

1、Read the documentation:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

2、2 GB or more of RAM per machine

3、2 or more CPU cores

4、Full network connectivity between all machines in the cluster (a public or a private network both work)

② Pre-installation checks: {run on every node}

1、Disable the swap partition:

[root@k8s_1 ~]# swapoff -a && sysctl -w vm.swappiness=0

[root@k8s_1 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

[root@k8s_1 ~]#
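The swap-off step can be verified with a small helper. A minimal sketch (the helper name and messages are my own) that reads /proc/swaps, since kubelet by default refuses to run while any swap device is active:

```shell
#!/usr/bin/env bash
# Report whether any swap device is still active by counting the data rows
# of /proc/swaps (the first line is the column header).
check_swap() {
  if [ "$(awk 'NR>1' /proc/swaps | wc -l)" -gt 0 ]; then
    echo "swap still enabled, run: swapoff -a"
  else
    echo "swap disabled"
  fi
}

check_swap
```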

2、No two nodes in the cluster may share a hostname, MAC address, or product_uuid:

[root@k8s_1 ~]# ifconfig eth0 | grep ether | awk '{print $2}'

[root@k8s_1 ~]# cat /sys/class/dmi/id/product_uuid

564DAB00-4845-F4D0-01E1-87D371046673

[root@k8s_1 ~]#

3、Make sure all nodes can ping one another.

4、Configure hosts resolution:

[root@k8s_1 ~]# cat >> /etc/hosts <<'EOF'

10.1.1.41 k8s41.itter.com

10.1.1.42 k8s42.itter.com

10.1.1.43 k8s43.itter.com

10.1.1.44 k8s44.itter.com

EOF

cat /etc/hosts
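Beyond eyeballing `cat /etc/hosts`, the entries can be checked mechanically. A sketch rehearsed against a scratch copy of the file (the helper name check_hosts is mine); on a real node you would point it at /etc/hosts:

```shell
#!/usr/bin/env bash
# Verify that each expected hostname appears in a hosts file.
# The file path is a parameter so the check can be rehearsed safely.
check_hosts() {
  local file="$1"; shift
  local name
  for name in "$@"; do
    if grep -qw "$name" "$file"; then
      echo "$name: ok"
    else
      echo "$name: MISSING"
    fi
  done
}

# Rehearse against a scratch copy with the same entries as above.
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
10.1.1.41 k8s41.itter.com
10.1.1.42 k8s42.itter.com
10.1.1.43 k8s43.itter.com
10.1.1.44 k8s44.itter.com
EOF
check_hosts "$tmp" k8s41.itter.com k8s42.itter.com k8s43.itter.com k8s44.itter.com
rm -f "$tmp"
```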

5、Let iptables see bridged traffic:

[root@k8s_1 ~]# cat <<EOF | tee /etc/modules-load.d/k8s.conf

br_netfilter

EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system

6、Enable kernel IP forwarding:

[root@k8s41 ~]# sysctl -w net.ipv4.ip_forward=1

[root@k8s41 ~]# cat /etc/sysctl.conf

net.ipv4.ip_forward = 1

[root@k8s41 ~]# sysctl -p

7、Check whether the required ports are already in use: {https://kubernetes.io/zh/docs/reference/ports-and-protocols/}

TCP inbound 6443 Kubernetes API server (used by: all)

TCP inbound 2379-2380 etcd server client API (used by: kube-apiserver, etcd)

TCP inbound 10250 Kubelet API (used by: self, control plane)

TCP inbound 10259 kube-scheduler (used by: self)

TCP inbound 10257 kube-controller-manager (used by: self)

TCP inbound 30000-32767 NodePort Services† (used by: all)
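The port list above can be turned into a quick pre-flight check. A minimal sketch using bash's built-in /dev/tcp so it needs no extra tools; a successful connect means something is already listening on that port:

```shell
#!/usr/bin/env bash
# Try to connect to each control-plane port on localhost; a successful
# connection means the port is already taken by some process.
check_port() {
  local port="$1"
  if (echo > "/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is IN USE"
  else
    echo "port $port is free"
  fi
}

for p in 6443 2379 2380 10250 10257 10259; do
  check_port "$p"
done
```

On a clean node every line should read "free"; after the cluster is up, ports such as 6443 and 10250 will show as in use.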

③ Configure docker: {run on every node}

1、Reference link:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#unchanged

2、Set up the repos:

[root@k8s_1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s_1 ~]# curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

[root@k8s_1 ~]# sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

[root@k8s_1 ~]#

3、Install docker:

[root@k8s_1 ~]# yum -y install docker-ce-18.09.9 docker-ce-cli-18.09.9

yum -y install bash-completion

source /usr/share/bash-completion/bash_completion

4、Tune the docker configuration:

mkdir -pv /etc/docker && cat <<EOF | sudo tee /etc/docker/daemon.json

{

"insecure-registries": ["k8s41.itter.com:5000"],

"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],

"exec-opts": ["native.cgroupdriver=systemd"]

}

EOF
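Because docker fails to start on a daemon.json syntax error, it can help to render the file into a scratch directory and validate it before copying it into /etc/docker. A sketch (the registry addresses are the ones used in this guide; the validation step is my own addition):

```shell
#!/usr/bin/env bash
# Write the daemon.json from this guide to a scratch directory, then check
# that it is well-formed JSON before it ever touches /etc/docker.
tmpdir="$(mktemp -d)"
cat > "$tmpdir/daemon.json" <<'EOF'
{
  "insecure-registries": ["k8s41.itter.com:5000"],
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
if python3 -m json.tool "$tmpdir/daemon.json" > /dev/null 2>&1; then
  echo "daemon.json OK"
else
  echo "daemon.json has a syntax error"
fi
```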

5、Enable starting at boot:

[root@k8s41 ~]# systemctl enable --now docker && systemctl status docker

6、Disable the firewall:

[root@k8s41 ~]# systemctl disable --now firewalld

[root@k8s41 ~]#

7、Disable selinux:

[root@k8s41 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

[root@k8s41 ~]# grep ^SELINUX= /etc/selinux/config

SELINUX=disabled

[root@k8s41 ~]#

8、Install an image registry on node 41:

[root@k8s41 ~]# docker run -dp 5000:5000 --restart always --name test-registry registry:2

④ Install k8s:

1、Reading:

https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

2、Set up the package repo:

[root@k8s41 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

repo_gpgcheck=0

EOF

3、List the available versions:

[root@k8s41 ~]# yum -y list kubeadm --showduplicates | sort -r

4、Install: {on every k8s node}

[root@k8s41 ~]# yum -y install kubeadm-1.15.12-0 kubelet-1.15.12-0 kubectl-1.15.12-0

5、Start the service: {enable and start kubelet; failing to start at this point is normal, and it will keep restarting on its own, because its config file is missing until the cluster is initialized}

[root@k8s41 ~]# systemctl enable --now kubelet && systemctl status kubelet

6、Initialize the master node with kubeadm: {run only on node 41, i.e. the master}

[root@k8s41 ~]# kubeadm init --kubernetes-version=v1.15.12 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16

--kubernetes-version: the version of the K8s control-plane components

--image-repository: the image registry to pull the control-plane images from

--pod-network-cidr: the CIDR for the Pod network

--service-cidr: the CIDR for Services

7、If the above succeeded, you will see a success message:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s41 ~]#

8、Create the kubeconfig directory and file:

[root@k8s41 ~]# mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
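The three commands in step 8 can be wrapped with an idempotency guard so re-running them does not clobber an existing config. A sketch rehearsed against a scratch directory (the function name is my own; on a real master the source is /etc/kubernetes/admin.conf):

```shell
#!/usr/bin/env bash
# Install a kubeconfig only if one is not already present.
setup_kubeconfig() {
  local src="$1" dest_dir="$2"
  mkdir -p "$dest_dir"
  if [ -f "$dest_dir/config" ]; then
    echo "kubeconfig already present, leaving it alone"
  else
    cp "$src" "$dest_dir/config"
    chown "$(id -u):$(id -g)" "$dest_dir/config"
    echo "kubeconfig installed"
  fi
}

# Rehearsal with a scratch admin.conf instead of the real one.
scratch="$(mktemp -d)"
echo "apiVersion: v1" > "$scratch/admin.conf"
setup_kubeconfig "$scratch/admin.conf" "$scratch/.kube"   # prints "kubeconfig installed"
setup_kubeconfig "$scratch/admin.conf" "$scratch/.kube"   # second run is a no-op
```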

9、View the cluster nodes:

[root@k8s41 ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s41.itter.com NotReady master 4m28s v1.15.12

[root@k8s41 ~]#

10、Check the api server: {if you can see output like the following, the master components are deployed}

[root@k8s41 ~]# cat $HOME/.kube/config

apiVersion: v1

clusters:

- cluster:

certificate-authority-data:

server: https://10.1.1.41:6443

name: kubernetes

[root@k8s41 ~]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com NotReady master 9m27s v1.15.12

[root@k8s41 ~]#

11、Join the other nodes to the master: {the commands below are the ones printed when the master initialization succeeded}

[root@k8s42 ~]# kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s43 ~]# kubeadm join 10.1.1.41:6443 --token n8ixpk.d5t3sg5gbidc2enl \

--discovery-token-ca-cert-hash sha256:3012aada56c007e8547333c60801603b735d381fdb30aa1f6d365811077590ec

[root@k8s44 ~]# # node 44 is deliberately left out for now

12、Check the join status:

[root@k8s41 ~]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com NotReady master 29m v1.15.12

node/k8s42.itter.com NotReady <none> 3m55s v1.15.12

node/k8s43.itter.com NotReady <none> 2m33s v1.15.12

[root@k8s41 ~]#

13、Enable k8s command completion:

[root@k8s41 flannel]# echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc

⑤ Install the k8s network add-on:

1、Check first: {a clean cluster}

[root@k8s41 ~]# kubectl get pods -A

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-94d74667-2qhcw 0/1 Pending 0 21m

kube-system coredns-94d74667-sx2fq 0/1 Pending 0 21m

kube-system etcd-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-apiserver-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-controller-manager-k8s41.itter.com 1/1 Running 0 20m

kube-system kube-proxy-9w2nz 1/1 Running 0 21m

kube-system kube-proxy-g6pss 1/1 Running 0 8m59s

kube-system kube-proxy-g7g7k 1/1 Running 0 61s

kube-system kube-scheduler-k8s41.itter.com 1/1 Running 0 20m

[root@k8s41 ~]#

2、The official manifests: {these do not seem to work; skip this step for now}

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-legacy.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

3、Working link: {you can download the yml file and apply it locally, or apply it directly online; pick either one}

1.Apply online:

[root@k8s41 flannel]# pwd

/root/flannel

[root@k8s41 flannel]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

2.Download first, then apply: {after this finishes, you may still need to apply the one I found above}

[root@k8s41 flannel]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@k8s41 flannel]# ls -l

drwxr-xr-x 2 root root 90 Oct 6 07:43 bad

-rw-r--r-- 1 root root 10599 Oct 6 19:14 kube-flannel.yml

[root@k8s41 flannel]# kubectl apply -f kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.extensions/kube-flannel-ds-amd64 created

daemonset.extensions/kube-flannel-ds-arm64 created

daemonset.extensions/kube-flannel-ds-arm created

daemonset.extensions/kube-flannel-ds-ppc64le created

daemonset.extensions/kube-flannel-ds-s390x created

[root@k8s41 flannel]#

4、Check whether the flannel components were created:

[root@k8s41 flannel]# kubectl get pods -A

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system coredns-94d74667-2qhcw 0/1 ContainerCreating 0 24m

kube-system coredns-94d74667-sx2fq 0/1 ContainerCreating 0 24m

kube-system etcd-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-apiserver-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-controller-manager-k8s41.itter.com 1/1 Running 0 23m

kube-system kube-flannel-ds-amd64-2dwvf 1/1 Running 0 73s

kube-system kube-flannel-ds-amd64-pq89n 1/1 Running 0 73s

kube-system kube-flannel-ds-amd64-qg27w 1/1 Running 0 73s

kube-system kube-proxy-9w2nz 1/1 Running 0 24m

kube-system kube-proxy-g6pss 1/1 Running 0 11m

kube-system kube-proxy-g7g7k 1/1 Running 0 4m

kube-system kube-scheduler-k8s41.itter.com 1/1 Running 0 23m

[root@k8s41 flannel]#

5、Verify:

[root@k8s41 flannel]# kubectl get cs,no

NAME STATUS MESSAGE ERROR

componentstatus/scheduler Healthy ok

componentstatus/controller-manager Healthy ok

componentstatus/etcd-0 Healthy {"health":"true"}

NAME STATUS ROLES AGE VERSION

node/k8s41.itter.com Ready master 31m v1.15.12

node/k8s42.itter.com Ready <none> 18m v1.15.12

node/k8s43.itter.com Ready <none> 10m v1.15.12

[root@k8s41 flannel]#

[root@k8s41 flannel]# kubectl get pods -A -o wide|grep flannel

kube-system kube-flannel-ds-amd64-2dwvf 1/1 Running 0 7m35s 10.1.1.42 k8s42.itter.com <none> <none>

kube-system kube-flannel-ds-amd64-pq89n 1/1 Running 0 7m35s 10.1.1.41 k8s41.itter.com <none> <none>

kube-system kube-flannel-ds-amd64-qg27w 1/1 Running 0 7m35s 10.1.1.43 k8s43.itter.com <none> <none>

[root@k8s41 flannel]#

⑥ Dealing with component problems:

1、Check the node status:

[root@localhost ~]# kubectl get pods -A

[root@localhost ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s51.itter.com Ready master 3m36s v1.15.12

k8s52.itter.com Ready <none> 2m50s v1.15.12

k8s53.itter.com Ready <none> 2m48s v1.15.12

[root@localhost ~]#

[root@localhost ~]# kubectl get pods -o wide

2、Delete the flannel components:

[root@k8s51 ~]# kubectl delete -f kube-flannel.yml

[root@k8s51 ~]# kubectl delete -f .

[root@k8s51 ~]# kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@k8s41 flanner]# kubectl delete pod/kube-flannel-ds-6cp9h -n kube-system --grace-period=0 --force

3、If the cluster join command was lost: {generate a new token}

[root@k8s41 ~]# kubeadm token create --print-join-command

kubeadm join 10.1.1.41:6443 --token h3es5v.4r3f1drdg48j5knd --discovery-token-ca-cert-hash sha256:71adc9bab2c293db9e8f89c66f8b01abb78d10f284011569cdf0f001f01450ad

[root@k8s41 ~]#

4、View logs:

[root@k8s41 ~]# journalctl -f -u kubelet.service

[root@k8s41 ~]# kubectl describe pod <pod-name> -n kube-system
