Table of Contents
- 1. Deploying the Guestbook Example on Kubernetes
- 1.1 Inspect the Cluster
- 1.2 Create the Replication Controller
- 1.3 Redis Master Service
- 1.4 Redis Slave Replication Controller
- 1.5 Redis Slave Service
- 1.6 Frontend Replication Controller
- 1.7 Guestbook Frontend Service
- 1.8 Access Guestbook Frontend
- 2. Networking Introduction
- 2.1 Cluster IP
- 2.2 TargetPort
- 2.3 NodePort
- 2.4 External IPs
- 2.5 Load Balancer
- 3. Create Ingress Routing
- 3.1 Create an HTTP Deployment
- 3.2 Deploy Ingress
- 3.3 Deploy Ingress Rules
- 3.4 Test
- 4. Liveness and Readiness Healthchecks
- 4.1 Create an HTTP Application
- 4.2 Readiness Probe
- 4.3 Liveness Probe
1. Deploying the Guestbook Example on Kubernetes
This scenario explains how to launch a simple, multi-tier web application using Kubernetes and Docker. The guestbook example application stores a visitor's notes in Redis via JavaScript API calls. Redis consists of a master (used for storage) and a set of replicated Redis 'slaves'.
1.1 Inspect the Cluster
controlplane $ kubectl cluster-info
Kubernetes master is running at https://172.17.0.29:6443
KubeDNS is running at https://172.17.0.29:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
controlplane $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controlplane Ready master 2m57s v1.14.0
node01 Ready <none> 2m31s v1.14.0
1.2 Create the Replication Controller
controlplane $ cat redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis:3.0.7-alpine
        ports:
        - containerPort: 6379
Create it:
controlplane $ kubectl create -f redis-master-controller.yaml
replicationcontroller/redis-master created
controlplane $ kubectl get rc
NAME DESIRED CURRENT READY AGE
redis-master 1 1 0 2s
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-2j4qm 1/1 Running 0 4s
1.3 Redis Master Service
The second part is the service. A Kubernetes service is a named load balancer that proxies traffic to one or more containers. The proxying works even when the containers are on different nodes.
Services proxy communication within the cluster and rarely expose ports to an external interface.
When you launch a service, you won't be able to connect to it with curl or netcat unless you do so from within the cluster. The recommended approach for external communication is a LoadBalancer service.
controlplane $ cat redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
Create it:
controlplane $ kubectl create -f redis-master-service.yaml
service/redis-master created
controlplane $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m58s
redis-master ClusterIP 10.111.64.45 <none> 6379/TCP 1s
controlplane $ kubectl describe services redis-master
Name: redis-master
Namespace: default
Labels: name=redis-master
Annotations: <none>
Selector: name=redis-master
Type: ClusterIP
IP: 10.111.64.45
Port: 6379/TCP
TargetPort: 6379/TCP
Endpoints: 10.32.0.193:6379
Session Affinity: None
Events: <none>
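To check the service from inside the cluster, you can launch a throwaway pod and ping Redis through the service's DNS name (a quick verification sketch; the pod name redis-test is our own, and we reuse the redis:3.0.7-alpine image since it ships redis-cli):
controlplane $ kubectl run redis-test --rm -it --restart=Never --image=redis:3.0.7-alpine -- redis-cli -h redis-master ping
A PONG reply confirms the service proxies traffic to the master pod.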
1.4 Redis Slave Replication Controller
In this example we'll run replicated Redis slaves, which replicate data from the master. More details on Redis replication are available at http://redis.io/topics/replication
As before, the controller defines how the service runs. In this example we need to determine how the service discovers the other pods. The YAML sets the GET_HOSTS_FROM property to dns. You could change it to use environment variables instead, but that introduces a creation-order dependency, since the services need to be running for their environment variables to be defined.
In this case we launch two instances of the pod using the image gcr.io/google_samples/gb-redisslave:v1. Each will link to redis-master via DNS.
controlplane $ cat redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379
Run it:
controlplane $ kubectl create -f redis-slave-controller.yaml
replicationcontroller/redis-slave created
controlplane $ kubectl get rc
NAME DESIRED CURRENT READY AGE
redis-master 1 1 1 4m29s
redis-slave 2 2 2 3s
1.5 Redis Slave Service
As before, we need to make our slaves accessible to incoming requests. This is done by starting a service that knows how to communicate with redis-slave.
Because we have two replicated pods, the service also provides load balancing between them.
controlplane $ cat redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    name: redis-slave
Run it:
controlplane $ kubectl create -f redis-slave-service.yaml
controlplane $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m
redis-master ClusterIP 10.111.64.45 <none> 6379/TCP 7m13s
redis-slave ClusterIP 10.109.135.21 <none> 6379/TCP 41s
1.6 Frontend Replication Controller
With the data services running, we can now deploy the web application. The pattern is the same as for the pods we deployed before. The YAML defines a replication controller named frontend that uses the image gcr.io/google_samples/gb-frontend:v3 and ensures three pods always exist.
controlplane $ cat frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
Run it:
controlplane $ kubectl create -f frontend-controller.yaml
replicationcontroller/frontend created
controlplane $ kubectl get rc
NAME DESIRED CURRENT READY AGE
frontend 3 3 1 2s
redis-master 1 1 1 20m
redis-slave 2 2 2 15m
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-bkcsj 1/1 Running 0 3s
frontend-ftjrk 1/1 Running 0 3s
frontend-jnckp 1/1 Running 0 3s
redis-master-2j4qm 1/1 Running 0 20m
redis-slave-79w2b 1/1 Running 0 15m
redis-slave-j8zqj 1/1 Running 0 15m
The PHP code communicates with Redis via HTTP and JSON. When a value is set, the request goes to redis-master, while reads come from the redis-slave nodes.
1.7 Guestbook Frontend Service
To make the frontend accessible, we need to start a service to configure the proxy.
The YAML defines the service as a NodePort. A NodePort lets you set a well-known port that is shared across the whole cluster, much like -p 80:80 in Docker.
In this case, the web application runs on port 80, but we expose the service on 30080.
controlplane $ cat frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  type: NodePort
  ports:
    # the port that this service should serve on
  - port: 80
    nodePort: 30080
  selector:
    name: frontend
controlplane $ kubectl create -f frontend-service.yaml
service/frontend created
controlplane $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.105.214.152 <none> 80:30080/TCP 2s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28m
redis-master ClusterIP 10.111.64.45 <none> 6379/TCP 21m
redis-slave ClusterIP 10.109.135.21 <none> 6379/TCP 15m
1.8 Access Guestbook Frontend
With all the controllers and services defined, Kubernetes starts launching them as pods. A pod can report different statuses depending on what is happening. For example, if the Docker image is still being downloaded, the pod's status is Pending, since it cannot start yet. Once ready, the status changes to Running.
View the pod status:
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-bkcsj 1/1 Running 0 4m55s
frontend-ftjrk 1/1 Running 0 4m55s
frontend-jnckp 1/1 Running 0 4m55s
redis-master-2j4qm 1/1 Running 0 24m
redis-slave-79w2b 1/1 Running 0 20m
redis-slave-j8zqj 1/1 Running 0 20m
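Status transitions can also be streamed live rather than polled, which is handy while images download (the -w flag watches for changes):
controlplane $ kubectl get pods -w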
Find the NodePort:
controlplane $ kubectl describe service frontend | grep NodePort
Type: NodePort
NodePort: 30080/TCP
View the UI:
Once the pods are running, the UI is reachable on port 30080. View the page at the URL
https://2886795293-30080-elsy05.environments.katacoda.com
Under the covers, the PHP service discovers the Redis instances via DNS. You have now deployed a working multi-tier application on Kubernetes.
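You can confirm the DNS-based discovery yourself by resolving the service name from a throwaway pod (a quick sketch; the pod name dns-test is our own, and the busybox image provides nslookup):
controlplane $ kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup redis-master
The lookup should return the cluster IP of the redis-master service.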
2. Networking Introduction
Kubernetes has advanced networking capabilities that allow pods and services to communicate inside and outside the cluster network.
In this scenario you will learn about the following Kubernetes service types.
- Cluster IP
- Target Port
- NodePort
- External IPs
- Load Balancer
A Kubernetes Service is an abstraction that defines the policy and approach for accessing a set of pods. The set of pods reached via a Service is based on a label selector.
2.1 Cluster IP
Cluster IP is the default approach when creating a Kubernetes service. The service is allocated an internal IP that other components can use to access the pods.
Having a single IP address allows the service to be load-balanced across multiple pods.
controlplane $ cat clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-svc
  labels:
    app: webapp1-clusterip
spec:
  ports:
  - port: 80
  selector:
    app: webapp1-clusterip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip
    spec:
      containers:
      - name: webapp1-clusterip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
controlplane $ kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp1-clusterip-deployment-669c7c65c4-gqlkc 1/1 Running 0 112s
webapp1-clusterip-deployment-669c7c65c4-hwkrl 1/1 Running 0 112s
controlplane $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6m28s
webapp1-clusterip-svc ClusterIP 10.100.49.56 <none> 80/TCP 116s
controlplane $ kubectl describe svc/webapp1-clusterip-svc
Name: webapp1-clusterip-svc
Namespace: default
Labels: app=webapp1-clusterip
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip"},"name":"webapp1-clusterip-svc","name...
Selector: app=webapp1-clusterip
Type: ClusterIP
IP: 10.100.49.56
Port: 80/TCP
TargetPort: 80/TCP
Endpoints: 10.32.0.5:80,10.32.0.6:80
Session Affinity: None
Events: <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.100.49.56
controlplane $ curl $CLUSTER_IP:80
This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc
controlplane $ curl $CLUSTER_IP:80
This request was processed by host: webapp1-clusterip-deployment-669c7c65c4-gqlkc
多個請求将展示基于公共标簽選擇器的跨多個 Pod 的服務負載均衡器。
2.2 TargetPort
TargetPort lets us separate the port the service is available on from the port the application is actually listening on. TargetPort is the port the application is configured to listen on; Port is how the application is accessed from the outside.
controlplane $ cat clusterip-target.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-clusterip-targetport-svc
  labels:
    app: webapp1-clusterip-targetport
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: webapp1-clusterip-targetport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-clusterip-targetport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-clusterip-targetport
    spec:
      containers:
      - name: webapp1-clusterip-targetport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f clusterip-target.yaml
service/webapp1-clusterip-targetport-svc created
deployment.extensions/webapp1-clusterip-targetport-deployment created
controlplane $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
webapp1-clusterip-svc ClusterIP 10.100.49.56 <none> 80/TCP 6m33s
webapp1-clusterip-targetport-svc ClusterIP 10.99.164.105 <none> 8080/TCP 2s
controlplane $ kubectl describe svc/webapp1-clusterip-targetport-svc
Name: webapp1-clusterip-targetport-svc
Namespace: default
Labels: app=webapp1-clusterip-targetport
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-clusterip-targetport"},"name":"webapp1-clusterip...
Selector: app=webapp1-clusterip-targetport
Type: ClusterIP
IP: 10.99.164.105
Port: 8080/TCP
TargetPort: 80/TCP
Endpoints: 10.32.0.7:80,10.32.0.8:80
Session Affinity: None
Events: <none>
controlplane $ export CLUSTER_IP=$(kubectl get services/webapp1-clusterip-targetport-svc -o go-template='{{(index .spec.clusterIP)}}')
controlplane $ echo CLUSTER_IP=$CLUSTER_IP
CLUSTER_IP=10.99.164.105
controlplane $ curl $CLUSTER_IP:8080
This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k
controlplane $ curl $CLUSTER_IP:8080
This request was processed by host: webapp1-clusterip-targetport-deployment-5599945ff4-9n89k
controlplane $ curl $CLUSTER_IP:8080
Once the service and pods are deployed, the application can be accessed via the cluster IP as before, but this time on the defined port 8080. The application itself is still configured to listen on port 80; the Kubernetes service manages the translation between the two.
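The translation is also visible in the service's endpoints: the service listens on 8080 while its endpoints point at the pods' port 80, matching the describe output above:
controlplane $ kubectl get endpoints webapp1-clusterip-targetport-svc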
2.3 NodePort
While TargetPort and ClusterIP make a service available inside the cluster, a NodePort exposes the service on each node's IP via a defined static port. No matter which node within the cluster is accessed, the service is reachable on the defined port number.
controlplane $ cat nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-nodeport-svc
  labels:
    app: webapp1-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: webapp1-nodeport
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-nodeport-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-nodeport
    spec:
      containers:
      - name: webapp1-nodeport-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f nodeport.yaml
service/webapp1-nodeport-svc created
deployment.extensions/webapp1-nodeport-deployment created
controlplane $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m
webapp1-clusterip-svc ClusterIP 10.100.49.56 <none> 80/TCP 9m39s
webapp1-clusterip-targetport-svc ClusterIP 10.99.164.105 <none> 8080/TCP 3m8s
webapp1-nodeport-svc NodePort 10.111.226.228 <none> 80:30080/TCP 48s
controlplane $ kubectl describe svc/webapp1-nodeport-svc
Name: webapp1-nodeport-svc
Namespace: default
Labels: app=webapp1-nodeport
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-nodeport"},"name":"webapp1-nodeport-svc","namesp...
Selector: app=webapp1-nodeport
Type: NodePort
IP: 10.111.226.228
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 30080/TCP
Endpoints: 10.32.0.10:80,10.32.0.9:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
controlplane $ curl 172.17.0.66:30080
This request was processed by host: webapp1-nodeport-deployment-677bd89b96-hqdbb
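Because the port is opened on every node, the same request works against any node's IP, not just the controlplane shown above (a sketch; substitute the node addresses reported by the first command for the placeholder):
controlplane $ kubectl get nodes -o wide
controlplane $ curl <node01-ip>:30080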
2.4 External IPs
Another approach to making a service available outside the cluster is via an external IP address.
Update the definition to use the current cluster's IP address (the sed command below replaces the HOSTIP placeholder):
controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - HOSTIP
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ sed -i 's/HOSTIP/172.17.0.66/g' externalip.yaml
controlplane $ cat externalip.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-externalip-svc
  labels:
    app: webapp1-externalip
spec:
  ports:
  - port: 80
  externalIPs:
  - 172.17.0.66
  selector:
    app: webapp1-externalip
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-externalip-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-externalip
    spec:
      containers:
      - name: webapp1-externalip-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl apply -f externalip.yaml
service/webapp1-externalip-svc created
deployment.extensions/webapp1-externalip-deployment created
controlplane $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m
webapp1-clusterip-svc ClusterIP 10.100.49.56 <none> 80/TCP 11m
webapp1-clusterip-targetport-svc ClusterIP 10.99.164.105 <none> 8080/TCP 5m15s
webapp1-externalip-svc ClusterIP 10.101.221.229 172.17.0.66 80/TCP 2s
webapp1-nodeport-svc NodePort 10.111.226.228 <none> 80:30080/TCP 2m55s
controlplane $ kubectl describe svc/webapp1-externalip-svc
Name: webapp1-externalip-svc
Namespace: default
Labels: app=webapp1-externalip
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-externalip"},"name":"webapp1-externalip-svc","na...
Selector: app=webapp1-externalip
Type: ClusterIP
IP: 10.101.221.229
External IPs: 172.17.0.66
Port: 80/TCP
TargetPort: 80/TCP
Endpoints: 10.32.0.11:80
Session Affinity: None
Events: <none>
controlplane $ curl 172.17.0.66
This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt
controlplane $ curl 172.17.0.66
This request was processed by host: webapp1-externalip-deployment-6446b488f8-tjrpt
2.5 Load Balancer
When running in a cloud such as EC2 or Azure, a public IP address can be configured and allocated via the cloud provider, issued via a load balancer such as an ELB. This allows additional public IP addresses to be allocated to a Kubernetes cluster without interacting directly with the cloud provider.
Although Katacoda is not a cloud provider, it is still possible to dynamically allocate IP addresses to services of type LoadBalancer. This is done by deploying keepalived, as defined in the manifest below.
controlplane $ cat cloudprovider.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-keepalived-vip
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true
      containers:
      - image: gcr.io/google_containers/kube-keepalived-vip:0.9
        name: kube-keepalived-vip
        imagePullPolicy: Always
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /lib/modules
          name: modules
          readOnly: true
        - mountPath: /dev
          name: dev
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # to use unicast
        args:
        - --services-configmap=kube-system/vip-configmap
        # unicast uses the ip of the nodes instead of multicast
        # this is useful if running in cloud providers (like AWS)
        #- --use-unicast=true
      volumes:
      - name: modules
        hostPath:
          path: /lib/modules
      - name: dev
        hostPath:
          path: /dev
      nodeSelector:
        # type: worker # adjust this to match your worker nodes
---
## We also create an empty ConfigMap to hold our config
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: kube-system
data:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: keepalived-cloud-provider
  name: keepalived-cloud-provider
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: keepalived-cloud-provider
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
      labels:
        app: keepalived-cloud-provider
    spec:
      containers:
      - name: keepalived-cloud-provider
        image: quay.io/munnerz/keepalived-cloud-provider:0.0.1
        imagePullPolicy: IfNotPresent
        env:
        - name: KEEPALIVED_NAMESPACE
          value: kube-system
        - name: KEEPALIVED_CONFIG_MAP
          value: vip-configmap
        - name: KEEPALIVED_SERVICE_CIDR
          value: 10.10.0.0/26 # pick a CIDR that is explicitly reserved for keepalived
        volumeMounts:
        - name: certs
          mountPath: /etc/ssl/certs
        resources:
          requests:
            cpu: 200m
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10252
            host: 127.0.0.1
          initialDelaySeconds: 15
          timeoutSeconds: 15
          failureThreshold: 8
      volumes:
      - name: certs
        hostPath:
          path: /etc/ssl/certs
controlplane $ kubectl apply -f cloudprovider.yaml
daemonset.extensions/kube-keepalived-vip configured
configmap/vip-configmap configured
deployment.apps/keepalived-cloud-provider created
controlplane $ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-9hrwv 1/1 Running 0 21m
coredns-fb8b8dccf-skwkj 1/1 Running 0 21m
etcd-controlplane 1/1 Running 0 20m
katacoda-cloud-provider-558d5c854b-6h955 1/1 Running 0 21m
keepalived-cloud-provider-78fc4468b-lpg9s 1/1 Running 0 2m41s
kube-apiserver-controlplane 1/1 Running 0 20m
kube-controller-manager-controlplane 1/1 Running 0 20m
kube-keepalived-vip-hq7hk 1/1 Running 0 21m
kube-proxy-468j8 1/1 Running 0 21m
kube-scheduler-controlplane 1/1 Running 0 20m
weave-net-w5zff 2/2 Running 1 21m
The service is then deployed with the LoadBalancer type:
controlplane $ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-loadbalancer-svc
  labels:
    app: webapp1-loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: webapp1-loadbalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1-loadbalancer-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp1-loadbalancer
    spec:
      containers:
      - name: webapp1-loadbalancer-pod
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
controlplane $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m
webapp1-clusterip-svc ClusterIP 10.100.49.56 <none> 80/TCP 19m
webapp1-clusterip-targetport-svc ClusterIP 10.99.164.105 <none> 8080/TCP 12m
webapp1-externalip-svc ClusterIP 10.101.221.229 172.17.0.66 80/TCP 7m22s
webapp1-loadbalancer-svc LoadBalancer 10.104.93.133 172.17.0.66 80:31232/TCP 97s
webapp1-nodeport-svc NodePort 10.111.226.228 <none> 80:30080/TCP 10m
controlplane $ kubectl describe svc/webapp1-loadbalancer-svc
Name: webapp1-loadbalancer-svc
Namespace: default
Labels: app=webapp1-loadbalancer
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"webapp1-loadbalancer"},"name":"webapp1-loadbalancer-svc"...
Selector: app=webapp1-loadbalancer
Type: LoadBalancer
IP: 10.104.93.133
LoadBalancer Ingress: 172.17.0.66
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 31232/TCP
Endpoints: 10.32.0.14:80,10.32.0.15:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreatingLoadBalancer 99s service-controller Creating load balancer
Normal CreatedLoadBalancer 99s service-controller Created load balancer
The service can now be accessed via the allocated IP address, in this case one from the 10.10.0.0/26 range.
controlplane $ export LoadBalancerIP=$(kubectl get services/webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}')
controlplane $ echo LoadBalancerIP=$LoadBalancerIP
LoadBalancerIP=172.17.0.66
controlplane $ curl $LoadBalancerIP
This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh
controlplane $ curl $LoadBalancerIP
This request was processed by host: webapp1-externalip-deployment-6446b488f8-xt4nh
3. Create Ingress Routing
Kubernetes has advanced networking capabilities that allow pods and services to communicate inside the cluster's network. An Ingress enables inbound connections to the cluster, allowing external traffic to reach the correct pod.
Ingress enables externally-reachable URLs, load-balanced traffic, SSL termination, and name-based virtual hosting for a Kubernetes cluster.
In this scenario you will learn how to deploy and configure Ingress rules to manage incoming HTTP requests.
3.1 Create an HTTP Deployment
First, deploy a sample HTTP server that will be the target of our requests. The manifest contains three Deployments, named webapp1, webapp2, and webapp3, each with a corresponding Service.
controlplane $ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp2
  template:
    metadata:
      labels:
        app: webapp2
    spec:
      containers:
      - name: webapp2
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp3
  template:
    metadata:
      labels:
        app: webapp3
    spec:
      containers:
      - name: webapp3
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  ports:
  - port: 80
  selector:
    app: webapp1
---
apiVersion: v1
kind: Service
metadata:
  name: webapp2-svc
  labels:
    app: webapp2
spec:
  ports:
  - port: 80
  selector:
    app: webapp2
---
apiVersion: v1
kind: Service
metadata:
  name: webapp3-svc
  labels:
    app: webapp3
spec:
  ports:
  - port: 80
  selector:
    app: webapp3
controlplane $ kubectl apply -f deployment.yaml
deployment.apps/webapp1 created
deployment.apps/webapp2 created
deployment.apps/webapp3 created
service/webapp1-svc created
service/webapp2-svc created
service/webapp3-svc created
controlplane $ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
webapp1 0/1 1 0 4s
webapp2 0/1 1 0 4s
webapp3 0/1 1 0 4s
3.2 Deploy Ingress
The YAML file ingress.yaml defines an Nginx-based Ingress controller together with a service that makes it available on port 80 to external connections using ExternalIPs. If the Kubernetes cluster were running on a cloud provider, it would use a LoadBalancer service type instead.
The ServiceAccount defines an account with the set of permissions needed to access the cluster and read the defined Ingress rules. The default server secret is a self-signed certificate used by Nginx for example SSL connections and is required by the Nginx default example server.
controlplane $ cat ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress
---
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN2akNDQWFZQ0NRREFPRjl0THNhWFhEQU5CZ2txaGtpRzl3MEJBUXNGQURBaE1SOHdIUVlEVlFRRERCWk8KUjBsT1dFbHVaM0psYzNORGIyNTBjbTlzYkdWeU1CNFhEVEU0TURreE1qRTRNRE16TlZvWERUSXpNRGt4TVRFNApNRE16TlZvd0lURWZNQjBHQTFVRUF3d1dUa2RKVGxoSmJtZHlaWE56UTI5dWRISnZiR3hsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUwvN2hIUEtFWGRMdjNyaUM3QlBrMTNpWkt5eTlyQ08KR2xZUXYyK2EzUDF0azIrS3YwVGF5aGRCbDRrcnNUcTZzZm8vWUk1Y2Vhbkw4WGM3U1pyQkVRYm9EN2REbWs1Qgo4eDZLS2xHWU5IWlg0Rm5UZ0VPaStlM2ptTFFxRlBSY1kzVnNPazFFeUZBL0JnWlJVbkNHZUtGeERSN0tQdGhyCmtqSXVuektURXUyaDU4Tlp0S21ScUJHdDEwcTNRYzhZT3ExM2FnbmovUWRjc0ZYYTJnMjB1K1lYZDdoZ3krZksKWk4vVUkxQUQ0YzZyM1lma1ZWUmVHd1lxQVp1WXN2V0RKbW1GNWRwdEMzN011cDBPRUxVTExSakZJOTZXNXIwSAo1TmdPc25NWFJNV1hYVlpiNWRxT3R0SmRtS3FhZ25TZ1JQQVpQN2MwQjFQU2FqYzZjNGZRVXpNQ0F3RUFBVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWpLb2tRdGRPcEsrTzhibWVPc3lySmdJSXJycVFVY2ZOUitjb0hZVUoKdGhrYnhITFMzR3VBTWI5dm15VExPY2xxeC9aYzJPblEwMEJCLzlTb0swcitFZ1U2UlVrRWtWcitTTFA3NTdUWgozZWI4dmdPdEduMS9ienM3bzNBaS9kclkrcUI5Q2k1S3lPc3FHTG1US2xFaUtOYkcyR1ZyTWxjS0ZYQU80YTY3Cklnc1hzYktNbTQwV1U3cG9mcGltU1ZmaXFSdkV5YmN3N0NYODF6cFErUyt1eHRYK2VBZ3V0NHh3VlI5d2IyVXYKelhuZk9HbWhWNThDd1dIQnNKa0kxNXhaa2VUWXdSN0diaEFMSkZUUkk3dkhvQXprTWIzbjAxQjQyWjNrN3RXNQpJUDFmTlpIOFUvOWxiUHNoT21FRFZkdjF5ZytVRVJxbStGSis2R0oxeFJGcGZnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdi91RWM4b1JkMHUvZXVJTHNFK1RYZUprckxMMnNJNGFWaEMvYjVyYy9XMlRiNHEvClJOcktGMEdYaVN1eE9ycXgrajlnamx4NXFjdnhkenRKbXNFUkJ1Z1B0ME9hVGtIekhvb3FVWmcwZGxmZ1dkT0EKUTZMNTdlT1l0Q29VOUZ4amRXdzZUVVRJVUQ4R0JsRlNjSVo0b1hFTkhzbysyR3VTTWk2Zk1wTVM3YUhudzFtMApxWkdvRWEzWFNyZEJ6eGc2clhkcUNlUDlCMXl3VmRyYURiUzc1aGQzdUdETDU4cGszOVFqVUFQaHpxdmRoK1JWClZGNGJCaW9CbTVpeTlZTW1hWVhsMm0wTGZzeTZuUTRRdFFzdEdNVWozcGJtdlFmazJBNnljeGRFeFpkZFZsdmwKMm82MjBsMllxcHFDZEtCRThCay90elFIVTlKcU56cHpoOUJUTXdJREFRQUJBb0lCQVFDZklHbXowOHhRVmorNwpLZnZJUXQwQ0YzR2MxNld6eDhVNml4MHg4Mm15d1kxUUNlL3BzWE9LZlRxT1h1SENyUlp5TnUvZ2IvUUQ4bUFOCmxOMjRZTWl0TWRJODg5TEZoTkp3QU5OODJDeTczckM5bzVvUDlkazAvYzRIbjAzSkVYNzZ5QjgzQm9rR1FvYksKMjhMNk0rdHUzUmFqNjd6Vmc2d2szaEhrU0pXSzBwV1YrSjdrUkRWYmhDYUZhNk5nMUZNRWxhTlozVDhhUUtyQgpDUDNDeEFTdjYxWTk5TEI4KzNXWVFIK3NYaTVGM01pYVNBZ1BkQUk3WEh1dXFET1lvMU5PL0JoSGt1aVg2QnRtCnorNTZud2pZMy8yUytSRmNBc3JMTnIwMDJZZi9oY0IraVlDNzVWYmcydVd6WTY3TWdOTGQ5VW9RU3BDRkYrVm4KM0cyUnhybnhBb0dCQU40U3M0ZVlPU2huMVpQQjdhTUZsY0k2RHR2S2ErTGZTTXFyY2pOZjJlSEpZNnhubmxKdgpGenpGL2RiVWVTbWxSekR0WkdlcXZXaHFISy9iTjIyeWJhOU1WMDlRQ0JFTk5jNmtWajJTVHpUWkJVbEx4QzYrCk93Z0wyZHhKendWelU0VC84ajdHalRUN05BZVpFS2FvRHFyRG5BYWkyaW5oZU1JVWZHRXFGKzJyQW9HQkFOMVAKK0tZL0lsS3RWRzRKSklQNzBjUis3RmpyeXJpY05iWCtQVzUvOXFHaWxnY2grZ3l4b25BWlBpd2NpeDN3QVpGdwpaZC96ZFB2aTBkWEppc1BSZjRMazg5b2pCUmpiRmRmc2l5UmJYbyt3TFU4NUhRU2NGMnN5aUFPaTVBRHdVU0FkCm45YWFweUNweEFkREtERHdObit3ZFhtaTZ0OHRpSFRkK3RoVDhkaVpBb0dCQUt6Wis1bG9OOTBtYlF4VVh5YUwKMjFSUm9tMGJjcndsTmVCaWNFSmlzaEhYa2xpSVVxZ3hSZklNM2hhUVRUcklKZENFaHFsV01aV0xPb2I2NTNyZgo3aFlMSXM1ZUtka3o0aFRVdnpldm9TMHVXcm9CV2xOVHlGanIrSWhKZnZUc0hpOGdsU3FkbXgySkJhZUFVWUNXCndNdlQ4NmNLclNyNkQrZG8wS05FZzFsL0FvR0FlMkFVdHVFbFNqLzBmRzgrV3hHc1RFV1JqclRNUzRSUjhRWXQKeXdjdFA4aDZxTGxKUTRCWGxQU05rMXZLTmtOUkxIb2pZT2pCQTViYjhibXNVU1BlV09NNENoaFJ4QnlHbmR2eAphYkJDRkFwY0IvbEg4d1R0alVZYlN5T294ZGt5OEp0ek90ajJhS0FiZHd6NlArWDZDODhjZmxYVFo5MWpYL3RMCjF3TmRKS2tDZ1lCbyt0UzB5TzJ2SWFmK2UwSkN5TGhzVDQ5cTN3Zis2QWVqWGx2WDJ1VnRYejN5QTZnbXo5aCsKcDNlK2JMRUxwb3B0WFhNdUFRR0xhUkcrYlNNcjR5dERYbE5ZSndUeThXczNKY3dlSTdqZVp2b0ZpbmNvVlVIMwphdmxoTUVCRGYxSjltSDB5cDBwWUNaS2ROdHNvZEZtQktzVEtQMjJhTmtsVVhCS3gyZzR6cFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
---
# Described at: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
# Source from: https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/common/ingress-class.yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  # annotations:
  #   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
      - image: nginx/nginx-ingress:edge
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        args:
        - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
        - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
  externalIPs:
  - 172.17.0.88
The Ingress controller is deployed in the same familiar way as other Kubernetes objects:
controlplane $ kubectl create -f ingress.yaml
namespace/nginx-ingress created
secret/default-server-secret created
serviceaccount/nginx-ingress created
configmap/nginx-config created
ingressclass.networking.k8s.io/nginx created
deployment.apps/nginx-ingress created
service/nginx-ingress created
controlplane $ kubectl get deployment -n nginx-ingress
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-ingress 0/1 1 0 4s
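Before deploying any rules, it is worth checking that the controller pod itself has started, following the same verification pattern as the other deployments:
controlplane $ kubectl get pods -n nginx-ingress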
3.3 Deploy Ingress Rules
Ingress rules are an object type in Kubernetes. Rules can be based on the host (domain) of the request, the path of the request, or a combination of both.
controlplane $ cat ingress-rules.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: my.kubernetes.example
    http:
      paths:
      - path: /webapp1
        backend:
          serviceName: webapp1-svc
          servicePort: 80
      - path: /webapp2
        backend:
          serviceName: webapp2-svc
          servicePort: 80
      - backend:
          serviceName: webapp3-svc
          servicePort: 80
The important parts of the rules are as follows.
The rules apply to requests for the host my.kubernetes.example. Two rules are defined based on the request path, along with one catch-all definition. Requests to the path /webapp1 are forwarded to the service webapp1-svc. Likewise, requests to /webapp2 are forwarded to webapp2-svc. If no rule applies, webapp3-svc is used.
This demonstrates how an application's URL structure can be kept independent of how the application is deployed.
controlplane $ kubectl create -f ingress-rules.yaml
ingress.extensions/webapp-ingress created
controlplane $ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
webapp-ingress nginx my.kubernetes.example 80 2s
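The configured hosts and paths can also be inspected once the object exists:
controlplane $ kubectl describe ingress webapp-ingress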
3.4 Test
With the Ingress rules applied, traffic is routed to the defined destinations.
The first request is handled by the webapp1 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88/webapp1
A second request is handled by the webapp2 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88/webapp2
Finally, all other requests are handled by the webapp3 deployment:
curl -H "Host: my.kubernetes.example" 172.17.0.88
4. Liveness and Readiness Healthchecks
在此場景中,您将了解 Kubernetes 如何使用
Readiness and Liveness Probes
檢查容器運作狀況。
Readiness Probes
檢查應用程式是否準備好開始處理流量。此探針解決了容器已啟動的問題,但該程序仍在預熱和配置自身,這意味着它尚未準備好接收流量。
Liveness Probes
確定應用程式健康并能夠處理請求。
4.1 Create an HTTP Application
controlplane $ cat deploy.yaml
kind: List
apiVersion: v1
items:
- kind: ReplicationController
  apiVersion: v1
  metadata:
    name: frontend
    labels:
      name: frontend
  spec:
    replicas: 1
    selector:
      name: frontend
    template:
      metadata:
        labels:
          name: frontend
      spec:
        containers:
        - name: frontend
          image: katacoda/docker-http-server:health
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
- kind: ReplicationController
  apiVersion: v1
  metadata:
    name: bad-frontend
    labels:
      name: bad-frontend
  spec:
    replicas: 1
    selector:
      name: bad-frontend
    template:
      metadata:
        labels:
          name: bad-frontend
      spec:
        containers:
        - name: bad-frontend
          image: katacoda/docker-http-server:unhealthy
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 1
            timeoutSeconds: 1
- kind: Service
  apiVersion: v1
  metadata:
    labels:
      app: frontend
      kubernetes.io/cluster-service: "true"
    name: frontend
  spec:
    type: NodePort
    ports:
    - port: 80
      nodePort: 30080
    selector:
      app: frontend
controlplane $ kubectl apply -f deploy.yaml
replicationcontroller/frontend created
replicationcontroller/bad-frontend created
service/frontend created
4.2 Readiness Probe
When the cluster was deployed, two pods were also deployed to demonstrate the health checks: the frontend and bad-frontend replication controllers from the deploy.yaml applied in the previous step.
When the Replication Controllers were deployed, each pod was given a Readiness and a Liveness check. Each check has the following format for running health checks over HTTP:
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 1
  timeoutSeconds: 1
The settings can be changed to call a different endpoint, such as /ping, depending on your application.
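For instance, a probe against a hypothetical /ping endpoint can also tune how frequently the check runs and how many failures are tolerated before a restart (a sketch; periodSeconds and failureThreshold are standard probe fields, while the /ping path is an assumption about your application):
livenessProbe:
  httpGet:
    path: /ping   # hypothetical health endpoint exposed by your app
    port: 80
  initialDelaySeconds: 1
  timeoutSeconds: 1
  periodSeconds: 10    # probe every 10 seconds
  failureThreshold: 3  # restart the container after 3 consecutive failures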
The first pod, bad-frontend, is an HTTP service that always returns a 500 error, indicating that it has not started correctly. You can view the pod's status with:
controlplane $ kubectl get pods --selector="name=bad-frontend"
NAME READY STATUS RESTARTS AGE
bad-frontend-5p4k6 0/1 CrashLoopBackOff 4 2m55s
Kubectl 将傳回使用我們的特定标簽部署的 Pod。因為健康檢查失敗,它會說零容器已準備就緒。它還将訓示容器的重新開機嘗試次數。
controlplane $ kubectl describe pod $pod
Name: bad-frontend-5p4k6
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: controlplane/172.17.0.44
Start Time: Tue, 09 Nov 2021 15:44:19 +0000
Labels: name=bad-frontend
Annotations: <none>
Status: Running
IP: 10.32.0.6
Controlled By: ReplicationController/bad-frontend
Containers:
bad-frontend:
Container ID: docker://ae3c84bfdaa178fe2976e8b075e4e98da95df06b6f5bd85ef2eb5f92466c5f5d
Image: katacoda/docker-http-server:unhealthy
Image ID: docker-pullable://katacoda/docker-http-server@sha256:bea95c69c299c690103c39ebb3159c39c5061fee1dad13aa1b0625e0c6b52f22
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 09 Nov 2021 15:47:44 +0000
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Tue, 09 Nov 2021 15:46:34 +0000
Finished: Tue, 09 Nov 2021 15:47:01 +0000
Ready: False
Restart Count: 5
Liveness: http-get http://:80/ delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:80/ delay=1s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h7qch (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-h7qch:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h7qch
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m31s default-scheduler Successfully assigned default/bad-frontend-5p4k6 to controlplane
Normal Pulling 3m21s kubelet, controlplane Pulling image "katacoda/docker-http-server:unhealthy"
Normal Pulled 3m11s kubelet, controlplane Successfully pulled image "katacoda/docker-http-server:unhealthy"
Normal Killing 2m19s (x2 over 2m49s) kubelet, controlplane Container bad-frontend failed liveness probe, will be restarted
Normal Created 2m18s (x3 over 3m11s) kubelet, controlplane Created container bad-frontend
Normal Pulled 2m18s (x2 over 2m48s) kubelet, controlplane Container image "katacoda/docker-http-server:unhealthy" already present on machine
Normal Started 2m16s (x3 over 3m10s) kubelet, controlplane Started container bad-frontend
Warning Unhealthy 2m5s (x5 over 3m5s) kubelet, controlplane Readiness probe failed: HTTP probe failed with statuscode: 500
Warning Unhealthy 119s (x8 over 3m9s) kubelet, controlplane Liveness probe failed: HTTP probe failed with statuscode: 500
Our second pod, frontend, returns an OK status when started.
controlplane $ kubectl get pods --selector="name=frontend"
NAME READY STATUS RESTARTS AGE
frontend-d29h8 1/1 Running 0 4m3s
4.3 Liveness Probe
Since our second pod is currently healthy, we can simulate a failure occurring.
At the moment, no crashes should have occurred:
controlplane $ kubectl get pods --selector="name=frontend"
NAME READY STATUS RESTARTS AGE
frontend-d29h8 1/1 Running 0 4m35s
Crash the service
The HTTP server has an extra endpoint that will cause it to return 500 errors. The endpoint can be called using kubectl exec:
controlplane $ pod=$(kubectl get pods --selector="name=frontend" --output=jsonpath={.items..metadata.name})
controlplane $ kubectl exec $pod -- /usr/bin/curl -s localhost/unhealthy
controlplane $ kubectl get pods --selector="name=frontend"
NAME READY STATUS RESTARTS AGE
frontend-d29h8 1/1 Running 1 5m56s
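The RESTARTS counter incrementing to 1 shows that the liveness probe detected the failure and the container was restarted. Describing the pod again (reusing the $pod variable set above) should show 'Liveness probe failed' events like the ones seen earlier for bad-frontend:
controlplane $ kubectl describe pod $pod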