
k8s metrics-server installation


Kubernetes Metrics Server is a cluster component that collects monitoring data from objects in a Kubernetes cluster, including Pods, Nodes, and containers. It aggregates this data and exposes it in a queryable format for other components and users. It is used for real-time monitoring of resource usage in the cluster, i.e. CPU and memory.

Installing Metrics Server is recommended so that common cluster resource-usage metrics can be queried easily. It is also a dependency of several other Kubernetes components, such as the Kubernetes Dashboard.

 

Metrics Server collects resource metrics from the kubelets and exposes them through the Metrics API on the Kubernetes apiserver, for consumption by the HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler).

The Metrics API can also be accessed via kubectl top, which makes it easier to debug autoscaling pipelines.
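Once metrics-server is running, kubectl top is the quickest sanity check. The helper below is hypothetical (not part of metrics-server): it converts the millicore notation used in the CPU column into a plain number, which is handy when scripting against kubectl top output:

```shell
# With metrics-server running, resource usage can be queried directly:
#   kubectl top nodes
#   kubectl top pods -n kube-system
# Hypothetical scripting helper: convert a kubectl-top CPU value
# ("250m" millicores, or whole cores like "2") into millicores.
cpu_to_millicores() {
  v=$1
  case "$v" in
    *m) echo "${v%m}" ;;          # already millicores, strip the suffix
    *)  echo $(( v * 1000 )) ;;   # whole cores -> millicores
  esac
}

cpu_to_millicores 250m   # 250
cpu_to_millicores 2      # 2000
```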

Reference: https://github.com/kubernetes-sigs/metrics-server

Deploying metrics-server:

1. Download the manifest

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

2. Push the image to Harbor

metrics-server: https://url69.ctfile.com/d/253469-56661059-70861c?p=2206 (access password: 2206)
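A sketch of the re-tag-and-push workflow, with the registry and project names (harbor.baimei.com/add-ons) assumed from the manifest below. The docker commands are shown commented; run them on a host that can reach both registries:

```shell
# Source and destination image references (names assumed from the manifest).
SRC=registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
DST=harbor.baimei.com/add-ons/metrics-server:v0.6.3

# On a host with access to both registries:
#   docker pull "$SRC"
#   docker tag  "$SRC" "$DST"
#   docker login -u admin harbor.baimei.com
#   docker push "$DST"
echo "$DST"
```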

3. Apply the manifest

cat high-availability-1.21+.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: class
        operator: Exists
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: metrics-server
            namespaces:
            - kube-system
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        # image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        # image: registry.cn-hangzhou.aliyuncs.com/baimei-k8s/metrics-server:v0.6.3
        # image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
        image: harbor.baimei.com/add-ons/metrics-server:v0.6.3
        # imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Deploy:

kubectl apply -f high-availability-1.21+.yaml
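After applying the manifest, the aggregated metrics API should report Available. The real checks need a cluster, so they are shown as comments; the small parser below only illustrates how to pull the AVAILABLE column out of the kubectl output:

```shell
# On the cluster:
#   kubectl -n kube-system get deploy metrics-server
#   kubectl get apiservice v1beta1.metrics.k8s.io
#   kubectl top nodes
# Extract the AVAILABLE column (3rd field of the data row) from
# `kubectl get apiservice` output:
apiservice_available() {
  awk 'NR==2 {print $3}'
}

# Sample output fed in for illustration:
printf 'NAME                     SERVICE                      AVAILABLE   AGE\nv1beta1.metrics.k8s.io   kube-system/metrics-server   True        2m\n' \
  | apiservice_available   # True
```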

 

Next, let's walk through an example.

Horizontal scaling (scale out): add more instances, e.g. more Pod replicas or server nodes.

Vertical scaling (scale up): increase the resources of an existing instance, such as memory and CPU.

Let's demonstrate horizontal scaling with an HPA (Horizontal Pod Autoscaler) example:

1. Write the Deployment manifest

cat 01-deploy-stress.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: baimei-linux86-stress
spec:
  replicas: 1
  selector:
    matchExpressions:
    - key: apps
      operator: Exists
  template:
    metadata:
      labels:
        apps: stress
    spec:
      containers:
      - name: web
        image: baimei2020/baimei-linux-tools:v0.1
        command:
        - tail
        - -f
        - /etc/hosts
        resources:
          requests:
             cpu: 500m
             memory: 200M
          limits:
             cpu: 1
             memory: 500M

 

2. Create the HPA rule

2.1 Declarative HPA creation (recommended)

cat 02-hpa.yaml

# API version
apiVersion: autoscaling/v2
# Resource type
kind: HorizontalPodAutoscaler
# HPA metadata
metadata:
  # Name
  name: baimei-linux86-stress-hpa
  # Namespace
  namespace: default
# Desired state
spec:
  # Maximum number of Pod replicas
  maxReplicas: 5
  # Metrics to monitor
  metrics:
    # A resource-type metric
  - resource:
      # Name of the resource
      name: cpu
      # Target threshold
      target:
        averageUtilization: 80
        type: Utilization
    type: Resource
  # Minimum number of Pod replicas
  minReplicas: 2
  # The resource this HPA rule applies to
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baimei-linux86-stress

Apply both manifests:

kubectl apply -f 01-deploy-stress.yaml 
kubectl apply -f 02-hpa.yaml 

Check in the dashboard:

 

If it prompts you to log in to Docker:

docker login -u admin -p 1 harbor.baimei.com

 

Stress test:

kubectl exec baimei-linux86-stress-6749ccfdd8-4r9zl -- stress -c 4 --verbose --timeout 10m
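stress -c 4 spins up 4 CPU-burning workers, so the Pod quickly hits its 1-CPU limit, i.e. roughly 200% of its 500m request. The HPA then computes the desired replica count with the documented formula desired = ceil(currentReplicas * currentUtilization / targetUtilization); a quick sketch of the arithmetic:

```shell
# HPA scaling formula, using integer ceiling division:
#   desired = ceil(currentReplicas * currentUtilization / target)
desired_replicas() {
  current=$1; util=$2; target=$3
  echo $(( (current * util + target - 1) / target ))
}

# 2 replicas at ~200% of the 500m request, target 80% -> scale to 5
# (which is exactly the maxReplicas: 5 cap in the HPA above).
desired_replicas 2 200 80   # 5
# Once the load stops, utilization drops and the formula yields 1,
# but minReplicas: 2 keeps the Deployment at 2.
desired_replicas 2 10 80    # 1 -> clamped to minReplicas
```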

 

kubectl get hpa

 

2.2 Imperative HPA creation (for quick testing)

kubectl autoscale deployment baimei-linux86-stress --min=2 --max=10 --cpu-percent=90
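Note that --cpu-percent (like averageUtilization above) is measured against the Pod's CPU request, not its limit. With the 500m request in the Deployment, scaling kicks in around the per-Pod usage computed by this hypothetical helper:

```shell
# Per-Pod CPU usage (in millicores) at which this HPA starts scaling out:
# the utilization target is a percentage of the container's CPU request.
trigger_millicores() {
  request_m=$1; percent=$2
  echo $(( request_m * percent / 100 ))
}

trigger_millicores 500 90   # 450 millicores for the imperative rule above
trigger_millicores 500 80   # 400 millicores for the declarative rule
```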

 
