
k8s Autoscaling Based on Prometheus Custom Metrics

Introduction

The previous article covered autoscaling based on CPU metrics. Resource metrics only include CPU and memory, which is generally enough. But if you want to drive the HPA from custom metrics, such as request QPS or the number of 5xx errors, you need custom metrics support; the most mature implementation today is Prometheus custom metrics. The custom metrics are provided by Prometheus, then aggregated into the apiserver by k8s-prometheus-adapter, achieving the same effect as the core metrics pipeline (metrics-server).


Below we demonstrate QPS-based autoscaling of a Kubernetes Pod using a custom metric from Prometheus monitoring.

A Prometheus environment is assumed to be already deployed.

Prepare the application to scale

Deploy an application, and make sure the application allows its metrics to be scraped by Prometheus:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metrics-app
  name: metrics-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: metrics-app
  template:
    metadata:
      labels:
        app: metrics-app
      annotations:
        prometheus.io/scrape: "true"   # allow scraping by Prometheus
        prometheus.io/port: "80"       # port Prometheus scrapes
        prometheus.io/path: "/metrics" # path Prometheus scrapes
    spec:
      containers:
      - image: metrics-app
        name: metrics-app
        ports:
        - name: web
          containerPort: 80
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-app
  labels:
    app: metrics-app
spec:
  ports:
  - name: web
    port: 80
    targetPort: 80
  selector:
    app: metrics-app
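
Apply the manifest to create the Deployment and Service. A minimal sketch, assuming the manifest above is saved as metrics-app.yaml (the filename is an assumption):

# kubectl apply -f metrics-app.yaml
# kubectl get pods -l app=metrics-app   # all 3 replicas should reach Running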

Once the application is deployed, verify in the Prometheus web UI that its endpoints have been auto-discovered. (The prometheus.io/* annotations above only take effect if your Prometheus scrape configuration contains a Kubernetes service-discovery job that honors them, as the configuration shipped with the common Prometheus Helm chart does.)


Verify that the metrics can be scraped correctly.

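Besides the web UI, you can also check from the command line that the application exposes the counter we will build on later. A minimal sketch, assuming the Service ClusterIP shown further below (10.0.0.80) and run from a node or Pod that can reach it:

# curl -s http://10.0.0.80/metrics | grep http_requests_total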

With that, the application to be scaled is ready. Next comes the configuration that lets the HPA obtain its QPS for scaling.

Deploy the Custom Metrics Adapter

The metrics collected by Prometheus cannot be consumed by Kubernetes directly, because the two data formats are incompatible. Another component, k8s-prometheus-adapter, converts Prometheus metrics into a format the Kubernetes API can understand. Since this is a custom API, it must also be registered with the main APIServer through the Kubernetes aggregator, so that it can be accessed directly under /apis/.

prometheus-adapter GitHub repository: https://github.com/DirectXMan12/k8s-prometheus-adapter

The prometheus-adapter has a stable Helm chart, which we use directly. Here we use Helm 3.0 and the Microsoft Azure China mirror.

First prepare the Helm environment (skip if already installed):

wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar zxvf helm-v3.0.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo update
helm repo list

Deploy prometheus-adapter, specifying the Prometheus address:

# helm install prometheus-adapter stable/prometheus-adapter --namespace kube-system --set prometheus.url=http://prometheus.kube-system,prometheus.port=9090
# helm list -n kube-system

Verify the deployment succeeded:

# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
prometheus-adapter-86574f7ff4-t9px4   1/1     Running   0          65s

確定擴充卡注冊到APIServer:

# kubectl get apiservices | grep custom
# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"

Create the HPA policy

The scaling rule we configure: scale out when the average QPS per Pod exceeds 0.8 requests per second.

# cat app-hpa-v2.yml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: metrics-app-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 800m   # 800m = 0.8 requests/second; for a threshold of 10 requests/second, set 10000m

# kubectl apply -f app-hpa-v2.yml

Configure the Prometheus custom metric

Right after the HPA is created it has no data, because the adapter does not yet know which metric you want (http_requests_per_second), so the HPA cannot obtain the metric from the Pods. Next we fix the missing value by configuring the custom metric:

# kubectl get hpa
NAME              REFERENCE                TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
metrics-app-hpa   Deployment/metrics-app   <unknown>/800m   1         10        3          36s
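
To see why the value is missing, you can describe the HPA; its Events section will report that http_requests_per_second cannot be retrieved from the custom metrics API yet (the exact wording varies by version). A sketch:

# kubectl describe hpa metrics-app-hpa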

Edit the prometheus-adapter ConfigMap in the kube-system namespace (where the chart was installed) and add a new seriesQuery entry at the top of its rules: section. The rule takes the http_requests_total values of all the service's Pods over the last 2 minutes, computes the per-second rate, and sums it per Pod to expose the data:

# kubectl edit cm prometheus-adapter -n kube-system
apiVersion: v1
data:
  config.yaml: |
    rules:
    - seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: {resource: "namespace"}
          kubernetes_pod_name: {resource: "pod"}
      name:
        matches: "^(.*)_total"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
……
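
For reference, the metricsQuery template above expands to a PromQL query along these lines. A sketch you can run against the Prometheus HTTP API to preview the values, assuming it is executed from inside the cluster (the prometheus.kube-system address passed to helm install only resolves there):

# curl -s 'http://prometheus.kube-system:9090/api/v1/query' --data-urlencode 'query=sum(rate(http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}[2m])) by (kubernetes_pod_name)'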

After editing, note that the adapter Pod does not reload its configuration dynamically, so once the config is changed we must delete the Pod so that a new one starts with the updated configuration:

# kubectl delete pod prometheus-adapter-86574f7ff4-t9px4 -n kube-system

A minute or two after the Pod is recreated with the new configuration, the HPA value looks normal:

# kubectl get hpa
NAME              REFERENCE                TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
metrics-app-hpa   Deployment/metrics-app   416m/800m   1         10        2          17m
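
You can also query the adapter's API directly to inspect the per-Pod values the HPA consumes; a minimal sketch, assuming the application runs in the default namespace:

# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests_per_second"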

Verify scaling based on the custom metric

Next, verify scale-out and scale-in with a load test.

# kubectl get svc
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.0.0.1     <none>        443/TCP   84d
metrics-app   ClusterIP   10.0.0.80    <none>        80/TCP    3d2h

# ab -n 100000 -c 100 http://10.0.0.80/metrics
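
While ab runs, it helps to watch the HPA and the Pods react from a second terminal; a sketch:

# kubectl get hpa metrics-app-hpa -w        # watch TARGETS climb past 800m and REPLICAS grow
# kubectl get pods -l app=metrics-app -w    # watch new Pods being created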

Observing the HPA and Pod status during the load test, the Deployment has automatically scaled out to 10 Pods:

# kubectl get hpa
NAME              REFERENCE                TARGETS        MINPODS   MAXPODS   REPLICAS   AGE
metrics-app-hpa   Deployment/metrics-app   289950m/800m   1         10        10         38m

# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
metrics-app-7674cfb699-2gznv   1/1     Running   0          40s
metrics-app-7674cfb699-2hk6r   1/1     Running   0          40s
metrics-app-7674cfb699-5926q   1/1     Running   0          40s
metrics-app-7674cfb699-5qgg2   1/1     Running   0          2d2h
metrics-app-7674cfb699-9zkk4   1/1     Running   0          25s
metrics-app-7674cfb699-dx8cj   1/1     Running   0          56s
metrics-app-7674cfb699-fmgpp   1/1     Running   0          56s
metrics-app-7674cfb699-k9thm   1/1     Running   0          25s
metrics-app-7674cfb699-wzxhk   1/1     Running   0          2d2h
metrics-app-7674cfb699-zdbtg   1/1     Running   0          40s

Some time after the load test stops, the Pod count automatically scales back down to 1 according to the HPA policy (by default the HPA waits through a downscale stabilization window of about 5 minutes before scaling in), which shows our configuration works.
