Environment:
- CentOS: 7.6
- Docker: 18.06.1-ce
- Kubernetes: 1.13.4
- Kubeadm: 1.13.4
- Kubelet: 1.13.4
- Kubectl: 1.13.4
Deployment overview:
To build the highly available cluster, first bring up a single Master node, then have the other servers join it to form three highly available Master nodes, and finally join the worker (Node) machines. The sections each node has to run are listed below:
- Master01: 2, 3, 4, 5, 6, 7, 8, 9, 11
- Master02, Master03: 2, 3, 5, 6, 4, 9
- node01, node02: 2, 5, 6, 9
Cluster architecture:
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1002.jpg)
1. kubeadm overview
What kubeadm is for
Kubeadm is a tool that provides kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters.
kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design it only cares about bootstrapping the cluster, not about preparing the machines beforehand. Likewise, installing the various nice-to-have addons, such as the Kubernetes Dashboard, monitoring solutions, or cloud-provider-specific addons, is outside its scope.
Instead, we expect higher-level, more tailored tooling to be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments makes it easy to create conformant clusters.
kubeadm features
- kubeadm init: bootstrap a Kubernetes master node
- kubeadm join: bootstrap a Kubernetes worker node and join it to the cluster
- kubeadm upgrade: upgrade a Kubernetes cluster to a newer version
- kubeadm config: if the cluster was initialized with kubeadm v1.7.x or lower, configure it so that kubeadm upgrade can be used
- kubeadm token: manage the tokens used by kubeadm join
- kubeadm reset: revert any changes kubeadm init or kubeadm join made to the host
- kubeadm version: print the kubeadm version
- kubeadm alpha: preview a set of new features in order to gather feedback from the community
Feature maturity
Area | Maturity Level |
---|---|
Command line UX | GA |
Implementation | |
Config file API | beta |
CoreDNS | |
kubeadm alpha subcommands | alpha |
High availability | |
DynamicKubeletConfig | |
Self-hosting | |
2. Preparation
1. Virtual machine allocation
Address | Hostname | Memory & CPU | Role |
---|---|---|---|
192.168.2.10 | — | — | vip |
192.168.2.11 | k8s-master-01 | 2C & 2G | master |
192.168.2.12 | k8s-master-02 | 2C & 2G | master |
192.168.2.13 | k8s-master-03 | 2C & 2G | master |
192.168.2.21 | k8s-node-01 | 2C & 4G | node |
192.168.2.22 | k8s-node-02 | 2C & 4G | node |
2. Ports used on each node
- Master nodes

Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 6443* | Kubernetes API server | All |
TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
TCP | Inbound | 10251 | kube-scheduler | Self |
TCP | Inbound | 10252 | kube-controller-manager | Self |

- Node (worker) nodes

Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 30000-32767 | NodePort Services** | All |
3. Base environment setup
Kubernetes needs a certain baseline environment to run properly: clocks synchronized across the nodes, hostname resolution, the firewall disabled, and so on.
Hostname resolution
Hosts in a distributed system usually communicate by hostname, which gives each host a fixed entry point even when its IP address may change, so a dedicated DNS service is normally responsible for resolving the node names. Since this is a test cluster, however, we keep things simple and use hosts-file-based name resolution instead.
Edit the hosts file
Log in to each server and edit /etc/hosts:
vim /etc/hosts
Add the following entries:
192.168.2.10 master.k8s.io k8s-vip
192.168.2.11 master01.k8s.io k8s-master-01
192.168.2.12 master02.k8s.io k8s-master-02
192.168.2.13 master03.k8s.io k8s-master-03
192.168.2.21 node01.k8s.io k8s-node-01
192.168.2.22 node02.k8s.io k8s-node-02
Set the hostname
Log in to each server and set its hostname:
# On 192.168.2.11
hostnamectl set-hostname k8s-master-01
# On 192.168.2.12
hostnamectl set-hostname k8s-master-02
# On 192.168.2.13
hostnamectl set-hostname k8s-master-03
# On 192.168.2.21
hostnamectl set-hostname k8s-node-01
# On 192.168.2.22
hostnamectl set-hostname k8s-node-02
Time synchronization
Synchronize the clock on all servers and enable the time service at boot:
systemctl start chronyd.service
systemctl enable chronyd.service
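To confirm the nodes are actually syncing you can query chrony afterwards (an optional verification step, not part of the original write-up):
# list the time sources chrony is using and the current offset
chronyc sources -v
chronyc tracking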
Disable the firewall
Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux
# If SELinux is currently enforcing, temporarily set it to permissive
setenforce 0
# Edit /etc/selinux/config to disable SELinux permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Check the SELinux status
getenforce
If it reports permissive, a reboot completes the change.
Disable swap
kubeadm's preflight checks verify that swap is disabled on the host and abort the deployment if it is not, so as long as the host has enough memory, disable all swap devices.
# Turn off every swap device that is currently enabled
swapoff -a && sysctl -w vm.swappiness=0
# Edit /etc/fstab and comment out every line that defines a swap device
vi /etc/fstab
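If you prefer not to edit the file by hand, the swap entry can also be commented out non-interactively (one possible approach; double-check /etc/fstab afterwards):
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab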
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1003.jpg)
Kernel parameters
Enable IP forwarding and make sure bridged traffic is processed by iptables.
Create the file /etc/sysctl.d/k8s.conf:
vim /etc/sysctl.d/k8s.conf
Add the following:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Load the br_netfilter module
modprobe br_netfilter
Apply the configuration file
sysctl -p /etc/sysctl.d/k8s.conf
The sysctl command configures kernel parameters at runtime.
Check that the bridge entries now exist
ls /proc/sys/net/bridge
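If that directory is missing, the module most likely is not loaded; a quick check (optional):
lsmod | grep br_netfilter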
Resource limits
/etc/security/limits.conf is the Linux resource-limit configuration file; it restricts how many system resources users may consume. Raise the limits:
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
Install dependency packages and related tools
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl
3. Install Keepalived
- What keepalived is: a service used in cluster management to keep a cluster highly available; it works much like heartbeat and protects against single points of failure.
- What keepalived does here: it provides the VIP (192.168.2.10) for haproxy and handles master/backup election across the three haproxy instances, so that losing one of them has minimal impact on the service.
1. Install Keepalived with yum
# Install keepalived
yum install -y keepalived
2. Configure Keepalived
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
# Global definitions: who to notify and how to identify this machine when a failover happens.
global_defs {
# 辨別本節點的字條串,通常為 hostname,但不一定非得是 hostname。故障發生時,郵件通知會用到。
router_id LVS_k8s
}
# Health-check script; when the check fails, the priority of the vrrp_instance is reduced by the weight below.
vrrp_script check_haproxy {
script "killall -0 haproxy" #根據程序名稱檢測程序是否存活
interval 3
weight -2
fall 10
rise 2
}
# vrrp_instance defines the VIP that is exposed to clients and its related attributes.
vrrp_instance VI_1 {
state MASTER # this node is the MASTER; set the other two nodes to BACKUP
interface ens33 # change to your own network interface
virtual_router_id 51
priority 250
advert_int 1
authentication {
auth_type PASS
auth_pass 35f18af7190d51c9f7f78f37300a0cbd
}
virtual_ipaddress {
192.168.2.10 # the virtual IP, i.e. the VIP
}
track_script {
check_haproxy
}
}
EOF
In this node's configuration state is set to MASTER; on the other two nodes set it to BACKUP.
Configuration notes:
- virtual_ipaddress: the VIP
- track_script: runs the health-check script defined above
- interface: the NIC that carries the node's own IP (not the VIP); VRRP packets are sent out on it.
- virtual_router_id: a value between 0 and 255 used to distinguish the VRRP multicast traffic of different instances
- advert_int: interval between VRRP advertisements, i.e. how often a master election takes place (effectively the health-check interval).
- authentication: authentication block; the types are PASS and AH (IPsec), and PASS is recommended (only the first 8 characters of the password are used).
- state: either MASTER or BACKUP; note that once the other keepalived nodes start, the node with the highest priority is elected MASTER anyway, so this setting has little practical effect.
- priority: used for the master election; to become master the value should ideally be about 50 higher than on the other machines; the valid range is 1-255 (values outside it are treated as the default of 100).
3. Start Keepalived
# Enable at boot
systemctl enable keepalived
# Start keepalived
systemctl start keepalived
# Check the status
systemctl status keepalived
4. Check the network state
After keepalived starts on the node whose state is MASTER, check the network: the virtual IP should now be bound to the configured interface.
ip address show ens33
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1004.jpg)
When keepalived is stopped on the current node, the virtual IP fails over: one of the nodes whose state is BACKUP is elected as the new MASTER, and the VIP then shows up on that node's interface.
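You can rehearse the failover before moving on (optional; the interface name ens33 is the one assumed above):
# on the current MASTER
systemctl stop keepalived
# on the BACKUP nodes the VIP should appear within a few seconds
ip address show ens33 | grep 192.168.2.10
# restore the original MASTER afterwards
systemctl start keepalived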
4. Install haproxy
Here haproxy acts as a reverse proxy for the apiserver and round-robins every request across the master nodes. Compared with a plain keepalived active/standby setup, where a single master carries all the traffic, this is more balanced and more robust.
1. Install haproxy with yum
yum install -y haproxy
2. Configure haproxy
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
mode tcp
bind *:16443
option tcplog
default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master01.k8s.io 192.168.2.11:6443 check
server master02.k8s.io 192.168.2.12:6443 check
server master03.k8s.io 192.168.2.13:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats
EOF
The haproxy configuration is identical on the other master nodes (192.168.2.12 and 192.168.2.13).
3. Start and check haproxy
# Enable at boot
systemctl enable haproxy
# Start haproxy
systemctl start haproxy
# Check the status
systemctl status haproxy
4. Check the haproxy ports
ss -lnt | grep -E "16443|1080"
Output:
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1005.jpg)
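As an extra check you can hit the stats page locally with the credentials from the configuration above (admin:awesomePassword on URI /admin?stats; adjust if you changed them):
curl -u admin:awesomePassword "http://127.0.0.1:1080/admin?stats"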
5. Install Docker (all nodes)
1. Remove any previously installed Docker
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-ce-cli \
docker-engine
Check whether any docker packages remain
rpm -qa | grep docker
If anything is left, remove it with yum -y remove XXX, for example:
yum remove docker-ce-cli
2. Configure the Docker yum repository
Pick either of the two repositories below; downloads from the official one are fairly slow, so the Aliyun mirror is recommended.
- Aliyun mirror
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
- Official Docker repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. Install Docker
List all installable docker-ce versions:
yum list docker-ce --showduplicates | sort -r
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1006.jpg)
Install the specified docker version
sudo yum install docker-ce-18.06.1.ce-3.el7 -y
Set the image storage directory
Pick a mounted directory with plenty of space to hold the images.
# Edit the docker unit file
vi /lib/systemd/system/docker.service
Find the ExecStart line and append the storage directory, for example --graph /apps/docker here:
ExecStart=/usr/bin/dockerd --graph /apps/docker
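After changing the unit file, systemd has to re-read it; once Docker is running you can also confirm the new root directory (the /apps/docker path is only the example used above):
systemctl daemon-reload
# after docker has been started:
docker info | grep "Docker Root Dir"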
Enable Docker at boot and start it
systemctl enable docker
systemctl start docker
Check iptables
Make sure the default policy of the FORWARD chain in the iptables filter table is ACCEPT.
iptables -nvL
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Starting with version 1.13, Docker changed its default firewall rules and disabled (set to DROP) the FORWARD chain in the iptables filter table, which breaks cross-node pod communication in a Kubernetes cluster. With docker 18.06 installed here, however, the default policy turns out to be ACCEPT again; it is unclear in which version this was changed back, and the 17.06 release we run in production still needs this policy adjusted manually.
6. Install kubeadm and kubelet
1. Configure a usable domestic yum repository for the installation:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubelet
- The following packages need to be installed on every machine:
  - kubeadm: the command used to bootstrap the cluster.
  - kubelet: runs on every node in the cluster and starts pods and containers.
  - kubectl: the command-line tool used to talk to the cluster.
List the available kubelet versions
yum list kubelet --showduplicates | sort -r
Install kubelet
yum install -y kubelet-1.13.4-0
Enable kubelet at boot and start it
systemctl enable kubelet
systemctl start kubelet
檢查狀态
檢查狀态,發現是failed狀态,正常,kubelet會10秒重新開機一次,等初始化master節點後即可正常
systemctl status kubelet
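If the unit keeps failing for other reasons, the kubelet logs usually explain why (a general troubleshooting command, not specific to this setup):
journalctl -u kubelet -f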
3. Install kubeadm
kubeadm is responsible for bootstrapping the cluster.
List the available kubeadm versions
yum list kubeadm --showduplicates | sort -r
Install kubeadm
yum install -y kubeadm-1.13.4-0
Installing kubeadm pulls in kubectl by default, so kubectl does not need to be installed separately.
4. Reboot the servers
To avoid any odd leftover state, reboot the servers here before continuing.
reboot
7. Initialize the first kubernetes master node
Because the virtual IP has to be bound, first check which of the master machines currently holds the VIP.
ip address show ens33
ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7e:65:b3 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.11/24 brd 192.168.2.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.2.10/32 scope global ens33
valid_lft forever preferred_lft forever
As shown, the VIP 192.168.2.10 and the address 192.168.2.11 are on the same machine, so the first Kubernetes master has to be initialized on master01.
1. Create the kubeadm configuration yaml file
cat > kubeadm-config.yaml << EOF
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - k8s-master-03
    - master.k8s.io
    - 192.168.2.10
    - 192.168.2.11
    - 192.168.2.12
    - 192.168.2.13
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.20.0.0/16
  serviceSubnet: 10.10.0.0/16
scheduler: {}
EOF
Values to note:
- certSANs: the virtual IP address (for safety, list every address in the cluster)
- controlPlaneEndpoint: virtual IP : listening port
- imageRepository: registry.aliyuncs.com/google_containers (use the Aliyun image registry)
- podSubnet: 10.20.0.0/16 (the pod address pool)
- serviceSubnet: 10.10.0.0/16 (the service address pool)
2. Initialize the first master node
kubeadm init --config kubeadm-config.yaml
Log output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join master.k8s.io:16443 --token dm3cw1.kw4hq84ie1376hji --discovery-token-ca-cert-hash sha256:f079b624773145ba714b56e177f52143f90f75a1dcebabda6538a49e224d4009
As the log shows, other nodes are joined to the cluster with:
kubeadm join master.k8s.io:16443 --token dm3cw1.kw4hq84ie1376hji --discovery-token-ca-cert-hash sha256:f079b624773145ba714b56e177f52143f90f75a1dcebabda6538a49e224d4009
3. Configure the kubectl environment
Set up the kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Check component status
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
Check the pod status
kubectl get pods --namespace=kube-system
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1007.jpg)
You can see that coredns has not started; this is because no network plugin has been configured yet. Configure one next and then check the status again.
8. Install the network plugin
1. Create the yaml file for the flannel plugin
cat > kube-flannel.yaml << EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.20.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
EOF
The "Network": "10.20.0.0/16" value must match the podSubnet: 10.20.0.0/16 set in kubeadm-config.yaml.
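A quick way to double-check that the two values line up (a simple grep, assuming both files are in the current directory):
grep -n "Network" kube-flannel.yaml
grep -n "podSubnet" kubeadm-config.yaml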
2. Create the flannel roles and pods
kubectl apply -f kube-flannel.yaml
Wait a short while, then check the pod status again
kubectl get pods --namespace=kube-system
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1008.jpg)
You can see that coredns is now running.
9. Join the cluster
1. Join the other masters to form the highly available control plane
Copy the certificates to the other master nodes
On master01, run the commands below to copy the Kubernetes certificates and config to master02 and master03.
If a different node was used to initialize the first master, copy that node's files to the other two masters instead; for example, if master03 was the first master, copy its Kubernetes configuration to master02 and master01.
- Copy the files to master02
ssh [email protected] mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf [email protected]:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* [email protected]:/etc/kubernetes/pki/etcd
- Copy the files to master03
ssh [email protected] mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf [email protected]:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* [email protected]:/etc/kubernetes/pki/etcd
- Join the masters to the cluster
Run the join command on both master02 and master03:
kubeadm join master.k8s.io:16443 --token dm3cw1.kw4hq84ie1376hji --discovery-token-ca-cert-hash sha256:f079b624773145ba714b56e177f52143f90f75a1dcebabda6538a49e224d4009 --experimental-control-plane
If the join fails and you want to retry, run kubeadm reset to clear the previous state, then repeat the "copy the certificates" and "join the cluster" steps.
Join output:
......
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
- Configure the kubectl environment
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
2. Join the worker nodes to the cluster
Besides the masters that make up the highly available control plane, the worker (slave) nodes also have to join the cluster.
Here k8s-node-01 and k8s-node-02 are joined to the cluster to take on workloads.
Run the join command that was printed when the first master was initialized:
kubeadm join master.k8s.io:16443 --token dm3cw1.kw4hq84ie1376hji --discovery-token-ca-cert-hash sha256:f079b624773145ba714b56e177f52143f90f75a1dcebabda6538a49e224d4009
3. If you have lost the join token and sha256 hash (skip this if everything worked)
- List the existing tokens
kubeadm token list
By default a token expires after 24 hours; once it has expired, generate a new one with
kubeadm token create
- Get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
- Assemble the join command
kubeadm join master.k8s.io:16443 --token 882ik4.9ib2kb0eftvuhb58 --discovery-token-ca-cert-hash sha256:0b1a836894d930c8558b350feeac8210c85c9d35b6d91fde202b870f3244016a
If a master node is joining, append --experimental-control-plane to the command.
4. Check that every node has joined the cluster
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master-01 Ready master 12m v1.13.4 192.168.2.11 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://18.6.1
k8s-master-02 Ready master 10m v1.13.4 192.168.2.12 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://18.6.1
k8s-master-03 Ready master 38m v1.13.4 192.168.2.13 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://18.6.1
k8s-node-01 Ready <none> 68s v1.13.4 192.168.2.21 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://18.6.1
k8s-node-02 Ready <none> 61s v1.13.4 192.168.2.22 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 docker://18.6.1
10. Removing a Node from the cluster
- On a master node:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
- On the node being removed (slave):
kubeadm reset
11. Deploy the dashboard
The manifest only needs to be applied once, from any machine with kubectl access to the cluster; here the dashboard is deployed from master01.
1. Create kubernetes-dashboard.yaml and start it
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
# 1. The image repository has been changed; edit it to point at your own registry if needed.
# 2. The image pull policy has been changed to imagePullPolicy: IfNotPresent.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
# A nodePort has been added and the default type ClusterIP changed to NodePort so the
# dashboard is reachable from outside the cluster; without this it would only be
# reachable from inside the cluster.
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
運作 dashboard
kubectl create -f kubernetes-dashboard.yaml
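Once the pod is up you can confirm the NodePort service is exposed as expected (standard kubectl queries; nothing beyond what the manifest above defines):
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
kubectl -n kube-system get svc kubernetes-dashboard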
2. Create a ServiceAccount for the Dashboard and bind it to the admin role
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
運作dashboard的使用者和角色綁定
kubectl create -f dashboard-user-role.yaml
Get the login token
kubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system
[root@k8s-master-01 local]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name: admin-token-2mfdz
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 74efd994-38d8-11e9-8740-000c299624e4
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1qdjd4ayIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImM4ZTMxYzk0LTQ2MWEtMTFlOS1iY2M5LTAwMGMyOTEzYzUxZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.TNw1iFEsZmJsVG4cki8iLtEoiY1pjpnOYm8ZIFjctpBdTOw6kUMvv2b2B2BJ_5rFle31gqGAZBIRyYj9LPAs06qT5uVP_l9o7IyFX4HToBF3veiun4e71822eQRUsgqiPh5uSjKXEkf9yGq9ujiCdtzFxnp3Pnpeuge73syuwd7J6F0-dJAp3b48MLZ1JJwEo6CTCMhm9buysycUYTbT_mUDQMNrHVH0868CdN_H8azA4PdLLLrFfTiVgoGu4c3sG5rgh9kKFqZA6dzV0Kq10W5JJwJRM1808ybLHyV9jfKN8N2_lZ7ehE6PbPU0cV-PyP74iA-HrzFW1yVwSLPVYA
3、運作dashboard并登陸
輸入位址:https://192.168.2.10:30001 進入 dashboard 界面
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1009.jpg)
Enter the token obtained above to log in to the dashboard.
![img](通過 Kubeadm 安裝 K8S 與高可用.assets/kubernetes-install-1010.jpg)
Troubleshooting
1. Masters do not take on workloads
By default the master nodes do not take on regular workloads; to let them participate you need to understand taints.
View the taints
# Check whether every node is allowed to take work
kubectl describe nodes | grep -E '(Roles|Taints)'
Remove the taint
# Make all nodes schedulable
kubectl taint nodes --all node-role.kubernetes.io/master-
# Make a specific node schedulable
kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master-
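If you later want a master to stop accepting regular workloads again, the taint can be restored (standard kubectl syntax; adjust the node name as needed):
kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master=:NoSchedule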
2. Rejoining the cluster
If a node fails to (re)join and reports errors like the following, fully reset it and join again:
network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
Back-off restarting failed container
# Reset the kubernetes services and networking; remove the network configuration and links
kubeadm reset
# Stop kubelet
systemctl stop kubelet
# Stop docker
systemctl stop docker
# Reset the cni configuration
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
# Restart docker
systemctl start docker
kubeadm join cluster.kube.com:16443 --token gaeyou.k2650x660c8eb98c --discovery-token-ca-cert-hash sha256:daf4c2e0264422baa7076a2587f9224a5bd9c5667307927b0238743799dfb362