
Binary Installation of Kubernetes

Table of Contents

          • Server environment
          • Preparation for the binary installation of the Kubernetes cluster
          • Installing the master components: kube-apiserver, kube-controller-manager, kube-scheduler
          • Installing the node components
            • Installing kubelet
            • Installing kube-proxy
          • Joining the nodes to the master
          • Installing the CNI plugins
          • Installing kube-flannel
          • Authorizing a Kubernetes user (example)
          • Connecting to the cluster remotely with kubectl
Server environment

Server IP       Node name    Components
192.168.10.42   k8s-master   etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, CNI plugin
192.168.10.43   k8s-node01   etcd, kubelet, kube-proxy, CNI plugin
192.168.10.44   k8s-node02   etcd, kubelet, kube-proxy, CNI plugin

Preparation for the binary installation of the Kubernetes cluster

  1. System initialization

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
           

Disable swap

swapoff -a      # temporary
vi /etc/fstab   # permanent: comment out or delete the swap line
           

Add hosts entries

vi /etc/hosts
           
192.168.10.42  k8s-master
192.168.10.43  k8s-node01
192.168.10.44  k8s-node02
           

Synchronize the system time

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum install -y ntpdate ntp
vim /etc/ntp.conf
           
server ntp1.aliyun.com
           

Enable kernel parameters

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl -p
           
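Note: on many kernels the net.bridge.* keys above only exist once the br_netfilter module is loaded, so sysctl -p can fail with "No such file or directory". A hedged addition (the module and modules-load.d mechanism are standard on CentOS 7, but verify on your kernel):

```shell
# Load br_netfilter so the net.bridge.bridge-nf-call-* sysctls exist,
# persist the module across reboots, then re-apply the sysctl settings.
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl -p
```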

Enable IPVS

yum -y install ipvsadm  ipset

# take effect immediately (lost on reboot)
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# persist across reboots
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
           
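One follow-up the listing above leaves out: on CentOS/RHEL, files under /etc/sysconfig/modules/ are only executed at boot if they are executable. A small sketch to finish the job (the verification grep is an assumption about module names):

```shell
# Make the module list executable so it runs at boot, run it once now,
# and verify the IPVS modules are actually loaded.
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
```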
  2. Generate the TLS certificates for etcd and Kubernetes

Use cfssl to generate the SSL certificates.
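The article does not show the cfssl commands themselves; below is a minimal sketch for the etcd certificates (the JSON file names, CN values, and expiry are assumptions; the `kubernetes` profile name matches the one used for the admin certificate later in this article). The outputs ca.pem, ca-key.pem, etcd.pem, and etcd-key.pem correspond to the paths referenced in etcd.conf below.

```shell
# CA signing policy with a long-lived "kubernetes" profile.
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

# Self-signed CA request.
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 }
}
EOF

# etcd certificate request: hosts must list every etcd member IP.
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.10.42", "192.168.10.43", "192.168.10.44"],
  "key": { "algo": "rsa", "size": 2048 }
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
```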

  3. Create etcd.conf
#[Member]
ETCD_NAME=etcd-1
ETCD_DATA_DIR=/var/lib/etcd/default.etcd
ETCD_LISTEN_PEER_URLS=https://192.168.10.42:2380
ETCD_LISTEN_CLIENT_URLS=https://192.168.10.42:2379

#[Clustering]
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.10.42:2379
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.10.42:2380
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.10.42:2380,etcd-2=https://192.168.10.43:2380,etcd-3=https://192.168.10.44:2380"
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new

# [security]
ETCD_CERT_FILE="/opt/kubernetes/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/opt/kubernetes/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/opt/kubernetes/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/opt/kubernetes/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
           
  4. Create the etcd.service unit file and move it to /usr/lib/systemd/system
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/etcd/conf/etcd.conf
WorkingDirectory=/opt/kubernetes/etcd
PermissionsStartOnly=true
ExecStart=/opt/kubernetes/etcd/bin/etcd
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
           
  5. Start etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
           
  6. Check the etcd cluster's health
etcdctl --ca-file=ca.pem --cert-file=etcd.pem --key-file=etcd-key.pem --endpoints=https://192.168.10.42:2379,https://192.168.10.43:2379,https://192.168.10.44:2379 cluster-health
           

Installing the master components: kube-apiserver, kube-controller-manager, kube-scheduler

  • Generate the SSL certificates in the same way as for etcd
  • Install kube-apiserver
  1. Create kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/kube-apiserver/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
           
  2. Create kube-apiserver.conf
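The article leaves this file's contents out. Below is a minimal sketch following the /opt/kubernetes layout used throughout this article; every path and the exact flag set are assumptions to adapt to your certificate locations, not the author's original file. Note that --service-cluster-ip-range matches the 10.0.0.0/24 used by kube-controller-manager below, and --token-auth-file points at the token.csv created in the next step.

```shell
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.10.42:2379,https://192.168.10.43:2379,https://192.168.10.44:2379 \
--etcd-cafile=/opt/kubernetes/etcd/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/etcd/ssl/etcd.pem \
--etcd-keyfile=/opt/kubernetes/etcd/ssl/etcd-key.pem \
--bind-address=192.168.10.42 \
--secure-port=6443 \
--advertise-address=192.168.10.42 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/kube-apiserver/conf/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem"
```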
  • Authorize kubelet TLS bootstrapping

  1. The token can be generated and replaced as needed
# cat token.csv
cf4df8c2ebdcbf6246057eb96b50c98a,kubelet-bootstrap,10001,"system:node-bootstrapper"

Format: token,user,UID,user group
           
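A replacement token can be generated like this (any 32-character hex string works; the exact command is just one option):

```shell
# Generate a random 32-character hex token and write token.csv in the
# token,user,UID,"group" format shown above.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > token.csv
cat token.csv
```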
  2. Authorize kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
           
  • Install kube-controller-manager
  1. Create kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager Server

[Service]
EnvironmentFile=/opt/kubernetes/kube-controller-manager/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
           
  2. Create kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
           
  • Install kube-scheduler
  1. Create kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Server

[Service]
EnvironmentFile=/opt/kubernetes/kube-scheduler/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
           
  2. Create kube-scheduler.conf
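This file is likewise not shown in the article; a minimal sketch consistent with the controller-manager flags above (--master=127.0.0.1:8080 assumes the apiserver still serves its local insecure port, as the controller-manager configuration also does):

```shell
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1"
```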

Start the Kubernetes master components and check their status

systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager
           
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}
           

Installing the node components

  • Node configuration files serve different purposes depending on their extension:
conf : basic configuration file
kubeconfig : configuration file for connecting to the apiserver
yml : main runtime configuration file (supports dynamic configuration updates)
           
  • Install kubelet

  1. Create kubelet.conf
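The article does not show kubelet.conf. A plausible sketch that ties together the files created in the following steps; the paths, the pause-image registry, and the exact flag set are assumptions, and --hostname-override must be set to each node's own name:

```shell
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node01 \
--network-plugin=cni \
--cni-bin-dir=/opt/cni/bin \
--cni-conf-dir=/opt/cni/net.d \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
--config=/opt/kubernetes/conf/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
```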
  2. Create bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.10.42:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: cf4df8c2ebdcbf6246057eb96b50c98a
           
  3. Create kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
           
  4. Create kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
           
  • Install kube-proxy

  1. Create kube-proxy.conf
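This file is not shown either; since the real settings live in kube-proxy-config.yml (created below), the EnvironmentFile likely only wires up logging and the config path. A sketch under the same assumed layout:

```shell
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/conf/kube-proxy-config.yml"
```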
  2. Create kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.10.42:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
           
  3. Create kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/conf/kube-proxy.kubeconfig
hostnameOverride: k8s-node01  # set to the current node's own hostname
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
           
  4. Create kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
           

Joining the nodes to the master

  • Start kubelet
systemctl start kubelet
           
  • View the CSRs on the master node
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-FcpbcWFOuED8FlAIBPUJcxS_6XctWdBy4ljBlboXwSU   52s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
           
kubectl certificate approve node-csr-FcpbcWFOuED8FlAIBPUJcxS_6XctWdBy4ljBlboXwSU
           
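When several nodes join at once, the pending CSRs can be approved in one pass; a convenience sketch, not from the original article, which assumes kubectl is configured on the master:

```shell
# Approve every pending CSR, then confirm the nodes registered.
kubectl get csr -o name | xargs -r kubectl certificate approve
kubectl get nodes
```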

Installing the CNI plugins

  • Download the CNI plugins
wget https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
           
  • Create the CNI working and configuration directories
mkdir -p /opt/cni/bin
mkdir -p /opt/cni/net.d
           
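The downloaded archive is then unpacked into the working directory (the file name is pinned to the wget command above):

```shell
# The CNI plugin binaries (bridge, flannel, host-local, ...) go into /opt/cni/bin.
tar -zxvf cni-plugins-linux-amd64-v0.8.7.tgz -C /opt/cni/bin
ls /opt/cni/bin
```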

Installing kube-flannel

  • Download kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
           
  • Edit the net-conf.json field in kube-flannel.yml so that the Network subnet matches the cluster-cidr configured for kube-controller-manager
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
           
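After the edit, apply the manifest from the master and watch the DaemonSet come up (the namespace and label are assumptions that vary by manifest version: older manifests deploy into kube-system, newer ones into kube-flannel):

```shell
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes   # nodes should turn Ready once flannel is running
```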

Authorizing a Kubernetes user (example)

  • Create the RBAC authorization file

    apiserver-to-kubelet-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/status
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
           
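Apply the file and verify that the role was created (commands assume a working kubeconfig on the master):

```shell
kubectl apply -f apiserver-to-kubelet-rbac.yaml
kubectl get clusterrole system:kube-apiserver-to-kubelet
```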
Connecting to the cluster remotely with kubectl
  • Generate an administrator certificate
vim admin-csr.json
           
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "JiangSu",
      "ST": "JiangSu",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
           
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
           
  • Create the kubeconfig file
USERNAME="admin"
APISERVER="https://192.168.10.42:6443"
CA_FILE="/opt/kubernetes/ssl/ca.pem"
CA_KEY_FILE="/opt/kubernetes/ssl/ca-key.pem"
           
kubectl config set-cluster ${USERNAME} --certificate-authority=${CA_FILE} --embed-certs=true --server=${APISERVER} --kubeconfig=${USERNAME}.conf
kubectl config set-credentials ${USERNAME} --client-certificate=${USERNAME}.pem --client-key=${USERNAME}-key.pem --embed-certs=true --kubeconfig=${USERNAME}.conf
kubectl config set-context ${USERNAME}-context@${USERNAME} --cluster=${USERNAME} --user=${USERNAME} --kubeconfig=${USERNAME}.conf
kubectl config use-context ${USERNAME}-context@${USERNAME} --kubeconfig=${USERNAME}.conf
kubectl create clusterrolebinding ${USERNAME} --clusterrole=cluster-admin --user=${USERNAME}
           
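From the remote machine, the generated file can then be verified; copying it to ~/.kube/config (kubectl's default location) is optional:

```shell
# Use the kubeconfig explicitly, or install it as the default.
kubectl --kubeconfig=./admin.conf get nodes
mkdir -p ~/.kube && cp admin.conf ~/.kube/config
kubectl get nodes
```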
