1.1 Role-Based Access Control (RBAC)

Task:
Create a ClusterRole named deployment-clusterrole that only allows the create verb on Deployment, DaemonSet, and StatefulSet resources.
In the namespace app-team1, create a ServiceAccount named cicd-token, and bind the ClusterRole created in the previous step to this ServiceAccount.
Solution:

Reference: https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/


kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets

kubectl -n app-team1 create serviceaccount cicd-token

kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
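
For reference, a minimal declarative sketch equivalent to the three imperative commands above (the RoleBinding name and file layout are arbitrary choices, not part of the task):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-token
  namespace: app-team1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-token-binding
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1

The binding can then be checked with kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1.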

1.2 Node Maintenance: Marking a Node Unschedulable

Reference: https://kubernetes.io/docs/re...

Task:

Mark the node ek8s-node-1 as unschedulable, then reschedule all Pods running on it.

Solution:
$ kubectl config use-context ek8s
$ kubectl cordon ek8s-node-1 # mark the node unschedulable
$ kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force
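
A quick way to verify the result (the field selector filters Pods by node; after the drain, only DaemonSet-managed Pods should remain):

$ kubectl get node ek8s-node-1   # STATUS should show Ready,SchedulingDisabled
$ kubectl get pods -A -o wide --field-selector spec.nodeName=ek8s-node-1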

1.3 Kubernetes Version Upgrade

Task:
The existing Kubernetes cluster is running version 1.18.8. Upgrade all control plane components on the master node only to version 1.19.0. Additionally, upgrade kubelet and kubectl on the master node.

Solution:

Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Put the master node into maintenance mode:

$ kubectl config use-context mk8s
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 11d v1.19.0
k8s-node01 Ready <none> 8d v1.18.8
k8s-node02 Ready <none> 11d v1.18.8

$ kubectl cordon k8s-master

Evict the Pods:

$ kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force

As instructed in the question, SSH to the master node:

$ ssh k8s-master
$ apt update
$ apt-cache policy kubeadm | grep 1.19.0 # check which versions are available
$ apt-get install kubeadm=1.19.0-00

Verify the upgrade plan:

$ kubeadm upgrade plan

If you see the following message, you can upgrade to the specified version:

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.19.0

Upgrade the master node:

$ kubeadm upgrade apply v1.19.0 --etcd-upgrade=false
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade kubelet and kubectl:

$ apt-get install -y kubelet=1.19.0-00 kubectl=1.19.0-00
$ systemctl daemon-reload
$ systemctl restart kubelet
$ kubectl uncordon k8s-master
node/k8s-master uncordoned
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 11d v1.19.0
k8s-node01 Ready <none> 8d v1.18.8
k8s-node02 Ready <none> 11d v1.18.8

Upgrading from 1.20.0 to 1.20.1 follows the same procedure:
$ apt-cache policy kubeadm | grep 1.20.1
$ apt-get install kubeadm=1.20.1-00

$ kubeadm upgrade plan

If you see the following message, you can upgrade to the specified version:

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.20.1

$ kubeadm upgrade apply v1.20.1 --etcd-upgrade=false

Upgrade kubelet and kubectl:

$ apt-get install -y kubelet=1.20.1-00 kubectl=1.20.1-00
$ systemctl daemon-reload
$ systemctl restart kubelet

$ kubectl uncordon k8s-master
node/k8s-master uncordoned
$ kubectl get node

1.4 etcd Backup and Restore

Task:

Create a snapshot of the etcd instance at https://127.0.0.1:2379 and save it to /srv/data/etcd-snapshot.db. If the snapshot process hangs, press Ctrl+C to abort and retry. Then restore an existing snapshot: /var/lib/backup/etcd-snapshot-previous.db. The certificates for running etcdctl are located at:

CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key

Solution:

Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/#备份

$ export ETCDCTL_API=3
$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
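
Optionally verify the snapshot before moving on (snapshot status prints the hash, revision, key count, and size):

$ ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db -w table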

Restore:
$ mkdir /opt/backup/ -p
$ cd /etc/kubernetes/manifests && mv kube-* /opt/backup
$ export ETCDCTL_API=3
$ etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore

$ vim etcd.yaml

In the volumes configuration, change path: /var/lib/etcd to /var/lib/etcd-restore:

volumes:
- hostPath:
    path: /etc/kubernetes/pki/etcd
    type: DirectoryOrCreate
  name: etcd-certs
- hostPath:
    path: /var/lib/etcd-restore   # changed from /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data

Restore the Kubernetes components:

$ mv /opt/backup/* /etc/kubernetes/manifests
$ systemctl restart kubelet

1.5 NetworkPolicy

Task:

Create a NetworkPolicy named all-port-from-namespace that allows Pods in the internal namespace to access port 9000 of Pods in the same namespace. Pods outside the internal namespace must not have access, and access to Pods not listening on port 9000 is not allowed.

Solution:

Reference: https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 9000
      protocol: TCP

1.6 Layer 4 Load Balancing: Service

Task:

Reconfigure the existing Deployment front-end: add a port configuration to the container named nginx, with the name http, exposing port 80. Then create a Service named front-end-svc that exposes the Deployment's http port, with the Service type set to NodePort.

Solution:

Reference: https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/

$ kubectl edit deploy front-end

Add the following configuration, mainly under the container named nginx (see the snippet below):
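
A sketch of what the container section should look like afterwards (the image line reflects the existing container; only the ports block is new, per the task statement):

        containers:
        - name: nginx
          image: nginx      # existing container; only the ports block below is added
          ports:
          - name: http
            containerPort: 80
            protocol: TCP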

Create the Service:
$ kubectl expose deploy front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort

1.7 Layer 7 Load Balancing: Ingress

Task:

In the ing-internal namespace, create an Ingress named pong that proxies to the Service hi on port 5678 at the path /hi. Verification: curl -kL <INTERNAL_IP>/hi should return hi.

Solution:

Reference: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
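
To verify, look up the Ingress address and curl the path (the ADDRESS column may take a minute to populate):

$ kubectl -n ing-internal get ingress pong
$ curl -kL <INTERNAL_IP>/hi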

1.8 Scaling Pods with a Deployment

Task:

Scale the Deployment named loadbalancer to 6 replicas.

Solution:
$ kubectl config use-context k8s

$ kubectl scale --replicas=6 deployment loadbalancer

Alternatively, $ kubectl edit deployment loadbalancer can be used to scale it in place.

1.9 Scheduling a Pod to a Specific Node

Task:

Create a Pod named nginx-kusc00401 with the image nginx, scheduled to a node carrying the label disk=spinning. This question can reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/

Reference:

https://kubernetes.io/zh/docs/tasks/configure-pod-container/assign-pods-nodes/

Solution:
$ vim pod-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    role: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx

$ kubectl create -f pod-ns.yaml
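
To confirm the node selector took effect:

$ kubectl get pod nginx-kusc00401 -o wide   # the NODE column should show a node labeled disk=spinning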

1.10 Checking Node Health

Task:

Count how many nodes in the cluster are in the Ready state, excluding nodes with a NoSchedule taint, then write the number to /opt/KUSC00402/kusc00402.txt.

Solution:
$ kubectl config use-context k8s
$ kubectl get node | grep -i ready # note the total as A
$ kubectl describe node | grep Taint | grep NoSchedule # note the total as B

Write the value x = A - B to /opt/KUSC00402/kusc00402.txt:

$ echo x >> /opt/KUSC00402/kusc00402.txt

grep -i ignores case.
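
If you prefer to compute the number in one shot, a small shell sketch of the same logic (double-check the counts by eye on the exam, since describe output can list multiple taints per node):

A=$(kubectl get node | grep -w Ready | wc -l)   # -w avoids also matching NotReady
B=$(kubectl describe node | grep Taint | grep NoSchedule | wc -l)
echo $((A - B)) >> /opt/KUSC00402/kusc00402.txt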

1.11 Multiple Containers in One Pod

Task:

Create a Pod named kucc1 that may contain 1-4 containers; in this question, four: nginx + redis + memcached + consul.

Solution:
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
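
Apply and confirm all four containers start (assuming the manifest is saved as kucc1.yaml):

$ kubectl create -f kucc1.yaml
$ kubectl get pod kucc1   # READY should eventually show 4/4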

1.12 PersistentVolume

Task:

Create a PV named app-config with a size of 2Gi and access mode ReadWriteMany. The volume type is hostPath, with the path /srv/app-config.

Solution:

Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  storageClassName: manual # optional
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
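
Apply and confirm the PV registers (assuming the manifest is saved as pv.yaml):

$ kubectl create -f pv.yaml
$ kubectl get pv app-config   # STATUS Available, CAPACITY 2Gi, ACCESS MODES RWX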

1.13 PersistentVolumeClaim

Task:

Create a PVC named pv-volume with the storageClass csi-hostpath-sc and a size of 10Mi.

Then create a Pod named web-server with the image nginx, mounting the PVC at /usr/share/nginx/html, with the access mode ReadWriteOnce. Afterwards, use kubectl edit or kubectl patch to change the PVC to 70Mi, and record the change.
Solution:

Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc

Create the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume

Resize:

Option 1, patch:
$ kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record

Option 2, edit:
$ kubectl edit pvc pv-volume

Note that expanding a PVC only works if its StorageClass has allowVolumeExpansion: true.

1.14 Monitoring Pod Logs

Task:

Monitor the logs of the Pod named foobar, filter out the lines containing unable-access-website, and write them to /opt/KUTR00101/foobar.

Solution:
$ kubectl config use-context k8s
$ kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar

1.15 Sidecar Proxy

Task:

Add a sidecar named busybox with the image busybox to an existing Pod named legacy-app. The sidecar's startup command is /bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log'. Both the sidecar and the original container mount a volume named logs at the directory /var/log/.

Solution:

Reference: https://kubernetes.io/zh/docs/concepts/cluster-administration/logging/

First export the YAML of the legacy-app Pod; it looks roughly like this:

$ kubectl get po legacy-app -oyaml > c-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done

Add the sidecar and the volume to this YAML:
$ vim c-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}

$ kubectl delete -f c-sidecar.yaml ; kubectl create -f c-sidecar.yaml
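
Confirm the sidecar is streaming the application log:

$ kubectl logs legacy-app -c busybox   # should show the "INFO" lines written by the count container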

1.16 Monitoring Pod Metrics

Task:

Find the Pods with the label name=cpu-user, pick the one with the highest CPU usage, and write its name into the existing file /opt/KUTR00401/KUTR00401.txt (note: no namespace is specified, so use -A to cover all namespaces).

Solution:
$ kubectl config use-context k8s
$ kubectl top po -A -l name=cpu-user
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-54d67798b7-hl8xc 7m 8Mi
kube-system coredns-54d67798b7-m4m2q 6m 8Mi

Note: use the actual Pod names here, and pick the Pod with the largest value in the CPU column. Also, plain values like 1, 2, 3 are larger than values with an m suffix, since 1 CPU equals 1000m. Be careful to use >> rather than >.

$ echo "coredns-54d67798b7-hl8xc" >> /opt/KUTR00401/KUTR00401.txt
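
kubectl top can also sort for you, which makes the largest consumer easy to spot:

$ kubectl top po -A -l name=cpu-user --sort-by=cpu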

1.17 Cluster Troubleshooting: kubelet Failure

Task:

A node named wk8s-node-0 is in the NotReady state. Restore it to a normal state, and make sure all changes are applied automatically on boot.

Solution:
$ ssh wk8s-node-0
$ sudo -i

$ systemctl status kubelet

$ systemctl start kubelet

$ systemctl enable kubelet
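
Then verify the fix persists and the node recovers (run the kubectl check from a host where kubectl is configured):

$ systemctl is-enabled kubelet   # should print "enabled"
$ kubectl get node wk8s-node-0   # STATUS should return to Ready after a short wait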

Master node troubleshooting (an older exam question; it should no longer appear on the current exam):
https://kubernetes.io/zh/docs...