Kubernetes: CKS Exam Summary


Background

I bought the CKS exam voucher together with the CKA one during last year's sale, for about 1200 RMB... I took it twice: 57 the first time. Damn! 62 the second, and somehow still not a pass... Maybe I had a premonition, because during this year's sale I had bought one more attempt as a backup... At least the third attempt passed, 86, thankfully...
Here is a summary of the 15 questions!
For details, see 昕光xg's blog post "CKS 认证 – CKS 2021 最新真题考试经验分享 – 练习题 04". I no longer remember the exact order of the questions, so I will record my approach following the list in that post. Strongly recommended; it is packed with useful material!

1. RuntimeClass gVisor

Create a RuntimeClass per the question, then update the pods in the given namespace to use it.
Reference: official docs, https://kubernetes.io/docs/concepts/containers/runtime-class/#usage

vim /home/cloud_user/sandbox.yml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandbox
handler: runsc
kubectl apply -f /home/xxx/sandbox.yml

Set runtimeClassName: sandbox on the pods in the xxxx namespace:

kubectl  -n xxx edit deployments.apps work01 # runtimeClassName: sandbox
kubectl  -n xxx edit deployments.apps work02
kubectl  -n xxx edit deployments.apps work03


That is basically it: edit the 3 deployments, add runtimeClassName: sandbox, and wait for the pods to be recreated!
How to confirm the change took effect?
kubectl -n xxx exec xxx -- dmesg

or kubectl get deployments xxx -n xxx -o yaml | grep runtime should do it.
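If you prefer not to edit interactively, a merge patch along these lines should work too (a sketch; the three deployment names are assumed from above):

for d in work01 work02 work03; do
  kubectl -n xxx patch deployment "$d" --type merge \
    -p '{"spec":{"template":{"spec":{"runtimeClassName":"sandbox"}}}}'
done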

2. NetworkPolicy: allow only specific pods/namespaces to access a labeled group of pods

This question seems to have stayed the same over time. I read plenty of write-ups online, and most of them appear to have problems.
In the development namespace, create a NetworkPolicy named pod-access that applies to the Pod named products-service, allowing access only from Pods in the test namespace, or from Pods in any namespace that carry the environment: staging label.
Of course the namespace, pod name, and NetworkPolicy name will vary.
The important part is to confirm the labels on the pod and the namespaces first:

kubectl get pods -n development --show-labels
kubectl get ns --show-labels

cat network-policy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: development
spec:
  podSelector:
    matchLabels:
      environment: staging   # must match the actual labels on the products-service pod
  policyTypes:
  - Ingress
  ingress:
  - from:   # Pods in namespaces carrying the name: testing label
    - namespaceSelector:
        matchLabels:
          name: testing
  - from:   # Pods in any namespace carrying the environment: staging label
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          environment: staging
kubectl apply -f network-policy.yaml

Official docs reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/
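As a quick sanity check, describing the policy prints the From clauses back in a readable form:

kubectl -n development describe networkpolicy pod-access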

3. NetworkPolicy: deny all Ingress/Egress

This came up in both of my attempts. The details vary, but it always boils down to creating a NetworkPolicy named denynetwork that denies all Ingress traffic, or all Egress, or both Ingress and Egress, in the development namespace.
参照:https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-and-all-egress-traffic
Read the question carefully and match exactly what it asks for!

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork 
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork 
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork 
  namespace:development
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

4. trivy image scanning

kubectl -n development get pods
kubectl -n development get pods --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
NAME       IMAGE
work1      busybox:1.33.1
work2      nginx:1.14.2
work3      amazonlinux:2
work4      amazonlinux:1
work5      centos:7
trivy image -s HIGH,CRITICAL busybox:1.33.1
trivy image -s HIGH,CRITICAL nginx:1.14.2 #HIGH and CRITICAL
trivy image -s HIGH,CRITICAL amazonlinux:2
trivy image -s HIGH,CRITICAL amazonlinux:1
trivy image -s HIGH,CRITICAL centos:7 #HIGH and CRITICAL

In my exam two of the images were centos; those did not meet the requirement, so the corresponding pods just had to be deleted.
https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/#%E6%A0%BC%E5%BC%8F%E5%8C%96%E8%BE%93%E5%87%BA
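To spare yourself one trivy invocation per image, a small loop helps (a sketch; the image list is taken from the custom-columns output above):

for img in busybox:1.33.1 nginx:1.14.2 amazonlinux:2 amazonlinux:1 centos:7; do
  echo "== $img"
  trivy image -s HIGH,CRITICAL "$img" | grep -E '^Total'
done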

5. kube-bench: fix insecure settings

This one is fairly stable: fix the master node according to the kube-bench findings. It is the usual routine, and with the hints given in the question there is basically nothing tricky!

kube-apiserver

vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --authorization-mode=Node,RBAC

kubelet

vim /var/lib/kubelet/config.yaml
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
protectKernelDefaults: true

systemctl restart kubelet.service
systemctl status kubelet.service

etcd

mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/
vim /etc/kubernetes/etcd.yaml
- --client-cert-auth=true
mv /etc/kubernetes/etcd.yaml /etc/kubernetes/manifests/   # move it back so the static pod is recreated

This question was quite a thrill too. When I got onto the master node the kubernetes environment was not even running... what. After editing the config it still did not come up. In the middle of all this my VPN dropped, and when I checked again the environment had somehow recovered. Baffling...
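Once the cluster is back, re-running kube-bench confirms whether the failed checks are actually gone (a sketch; output markers per current kube-bench releases):

kubectl get pods -n kube-system                     # etcd / kube-apiserver back to Running
kube-bench run --targets master | grep '\[FAIL\]'   # should come back empty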

6. clusterrole

This one also varied slightly between my two attempts, but it always boils down to three steps:

  1. Edit the existing role in the namespace so that it only allows the list verb on one resource type.
  2. Create a new serviceaccount.
  3. Create a role named role-2 and bind sa-dev-1 through a rolebinding, allowing only update on persistentvolumeclaims.

# kubectl -n db get rolebindings
NAME       ROLE          AGE
sa-dev-1   Role/role-1   7d16h

# kubectl -n db edit role role-1
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-1
  namespace: db
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/db/roles/role-1
rules:
- apiGroups:
  - ""
  resources:
  - endpoints   # only allow list on endpoints
  verbs:
  - list

kubectl -n db create role role-2 --resource=persistentvolumeclaims --verb=update
kubectl create rolebinding role-2-binding --role=role-2 --serviceaccount=db:service-account-web -n db
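kubectl auth can-i is a cheap way to verify that the binding grants exactly what was asked:

kubectl -n db auth can-i update persistentvolumeclaims \
  --as system:serviceaccount:db:service-account-web   # expect: yes
kubectl -n db auth can-i delete persistentvolumeclaims \
  --as system:serviceaccount:db:service-account-web   # expect: no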

7. serviceAccount

Create a ServiceAccount frontend-sa in the qa namespace that must not have access to any secrets. Create a Pod that uses this ServiceAccount (the sample below names it my-pod), then delete the unused serviceaccounts in the qa namespace.
Reference: [https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
automountServiceAccountToken: false

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: qa
spec:
  serviceAccountName: frontend-sa
  automountServiceAccountToken: false

kubectl get sa -n qa should show three serviceaccounts: frontend-sa, default, and one more. Presumably you keep frontend-sa and delete the other two? But does default really need deleting? I hesitated here. (Note that if you delete default, the controller recreates it right away, so deleting only the other unused one should be the safe move.)
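Before deleting anything it is worth listing which serviceaccount each pod actually uses (a sketch):

kubectl -n qa get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}'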
8. Dockerfile and Pod yaml hardening

This part also left me a bit puzzled. Weren't we supposed to fix two things in each file? In the Dockerfile, should I have fixed three? The question states the base image is ubuntu:16.04, but the provided file uses latest, so presumably that should be changed as well.

Dockerfile:
remove the two USER root lines
set the base image to ubuntu:16.04
Pod yaml:
comment out the line with the privileged setting

[https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#from](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#from)
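A hypothetical before/after for the Dockerfile part (the RUN/CMD lines are invented; only the base image and USER fixes matter):

# before
FROM ubuntu:latest
USER root
RUN apt-get update && apt-get install -y curl
USER root
CMD ["bash"]

# after
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl
CMD ["bash"]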
9. secret

There is a secret named db1-test in the istio-system namespace. Complete the following:

  1. Save the username field to /home/candidate/user.txt and the password field to /home/candidate/old_pass.txt. You need to create the files yourself.
  2. Create a secret from the given user and password.
  3. Mount the secret into a pod.

Create a secret named db2-test in the istio-system namespace, and a pod with the following spec:

pod name: secret-pod
namespace: istio-system
container name: dev-container
image: nginx   # it may also be httpd; I hit both across my attempts
volume name: secret-volume
mount path: /etc/secret

1. Write the base64-decoded secrets to the files

Many write-ups use kubectl with jsonpath; being slow-witted I went the traditional way:

kubectl get secrets -n istio-system db1-test -o yaml
echo "" |base64 -d > /home/candidate/old_pass.txt
echo "" |base64 -d > /home/candidate/user.txt 

Mind the order: when you print the secret, password comes before username (alphabetical order), and I habitually reach for username first. Easy to mix up.
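For reference, the jsonpath variant those write-ups use would look roughly like this (same result, no manual copying of the base64 strings):

kubectl -n istio-system get secret db1-test -o jsonpath='{.data.username}' | base64 -d > /home/candidate/user.txt
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.password}' | base64 -d > /home/candidate/old_pass.txt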

2. Create the secret

Reference: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/

kubectl create secret generic db2-test -n istio-system \
--from-literal=username=production-instance \
--from-literal=password=KvLftKgs4aHV

3. Mount the secret into the pod

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: istio-system
spec:
  containers:
  - name: dev-container
    image: nginx   # mine was httpd
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db2-test

10. PodSecurityPolicy

Basically follow the official docs:

  1. Create a PodSecurityPolicy named restrict-policy that prevents the creation of privileged Pods.
  2. Create a ClusterRole named restrict-access-role that allows using the newly created restrict-policy PodSecurityPolicy.
  3. Create a ServiceAccount named psp-denial-sa in the staging namespace.
  4. Create a ClusterRoleBinding named deny-access-bind that binds the new ServiceAccount to the new ClusterRole.

vim restrict-policy.yml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-policy
spec:
  privileged: false   # the crucial line: remember to set false here
  runAsUser:
    rule: "RunAsAny"
  fsGroup:
    rule: "RunAsAny"
  seLinux:
    rule: "RunAsAny"
  supplementalGroups:
    rule: "RunAsAny"
kubectl apply -f restrict-policy.yml
kubectl create clusterrole restrict-access-role --verb=use --resource=psp --resource-name=restrict-policy
kubectl create sa psp-denial-sa -n staging
kubectl create clusterrolebinding deny-access-bind --clusterrole=restrict-access-role --serviceaccount=staging:psp-denial-sa
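kubectl auth can-i also works for verifying the PSP binding:

kubectl auth can-i use podsecuritypolicy/restrict-policy \
  --as system:serviceaccount:staging:psp-denial-sa   # expect: yes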

Note: see https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems

11. Container security: delete pods that have Volumes or are privileged

Check all pods in the production namespace for privileged containers or mounted volumes:

kubectl get pods NAME -n production -o jsonpath={.spec.volumes} | jq
kubectl get pods NAME -o yaml -n production | grep "privi.*: true"

Then delete the privileged or volume-mounting Pods. If I remember right there were 3 pods and 2 of them had to be deleted?
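Checking each pod by hand is tedious; a loop like this covers both conditions in one pass (a sketch):

for p in $(kubectl -n production get pods -o name); do
  echo "== $p"
  kubectl -n production get "$p" -o jsonpath='{.spec.volumes}{"\n"}'
  kubectl -n production get "$p" -o yaml | grep "privi.*: true"
done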

12. audit: log auditing

The audit part was quite a thrill. I have no idea why it would not come up on my first two attempts...
Thinking back now, I deleted the pre-existing rules the file shipped with, which was probably wrong:
"A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml. It only specifies what not to log", i.e. the provided file starts with the rules for what should be excluded from the log.

  1. namespaces changes at RequestResponse level
  2. the request body of persistentvolumes changes in the namespace front-apps
  3. ConfigMap and Secret changes in all namespaces at the Metadata level

cat /etc/kubernetes/logpolicy/sample-policy.yaml

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
omitStages:
  - "RequestReceived"
rules:
  # keep the original "what not to log" rules from the provided file here
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["namespaces"]
  - level: Request
    resources:
    - group: ""
      resources: ["persistentvolumes"]
    namespaces: ["front-apps"]
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  - level: Metadata
    omitStages:
      - "RequestReceived"

kube-apiserver.yaml

Set the policy file path, the log path, the maximum retention days, and the maximum number of retained audit log files:

- --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=10
- --audit-log-maxbackup=1

Restart and verify

systemctl restart kubelet
kubectl apply -f xxx.yaml
tail -f /var/log/kubernetes/audit.log
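Each audit entry is one JSON object per line, so a quick structural check is possible where jq is available (a sketch):

tail -n 1 /var/log/kubernetes/audit.log | jq '{level, stage, resource: .objectRef.resource}'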

Note: back up the files before you edit! And stick closely to the official docs: [https://kubernetes.io/docs/tasks/debug-application-cluster/audit/](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/). The volume mounts for the log and policy paths were already configured in the exam environment, so nothing needed to be added there.

13. image policy

Reference: [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)

admission-control.conf (or admission-control.yaml)

vim /etc/kubernetes/admission-control/admission-control.conf
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  path: imagepolicy.conf

imagepolicy.conf (or imagepolicy.json)

vim /etc/kubernetes/admission-control/imagepolicy.conf
{
  "imagePolicy": {
    "kubeConfigFile": "/etc/kubernetes/admission-control/imagepolicy_backend.kubeconfig",
    "allowTTL": 50,
    "denyTTL": 50,
    "retryBackoff": 500,
    "defaultAllow": false
  }
}
Note: the only edit was changing defaultAllow from true to false!

imagepolicy_backend.kubeconfig

vim /etc/kubernetes/admission-control/imagepolicy_backend.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: trivy-k8s-webhook
  cluster:
    certificate-authority: /etc/kubernetes/admission-control/imagepolicywebhook-ca.crt
    server: https://acg.trivy.k8s.webhook…
contexts:
- name: trivy-k8s-webhook
  context:
    cluster: trivy-k8s-webhook
    user: api-server
current-context: trivy-k8s-webhook
preferences: {}
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission-control/api-server-client.crt
    client-key: /etc/kubernetes/admission-control/api-server-client.key

Only the server field needed filling in!

kube-apiserver

vim /etc/kubernetes/manifests/kube-apiserver.yaml

- --admission-control-config-file=/etc/kubernetes/admission-control/admission-control.conf
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook

Verify

systemctl restart kubelet
kubectl apply -f /root/xxx/vulnerable-manifest.yaml
tail -n 10 /var/log/imagepolicy/roadrunner.log

14. api-server flag adjustments

I did not really understand this question. All I figured out was changing this in the apiserver:
- --enable-admission-plugins=AlwaysAdmit
to:
- --enable-admission-plugins=NodeRestriction

Following the hints in the question I also deleted an anonymous clusterrole... Beyond that I had no idea how to proceed, so in all likelihood I scored nothing on this one.
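For the anonymous part, locating the binding before deleting it would go something like this (a sketch; the exact object name in the exam may differ):

kubectl get clusterrolebindings -o wide | grep -i anonymous
kubectl delete clusterrolebinding <the-anonymous-binding>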
15. AppArmor

Note: see [https://kubernetes.io/docs/tutorials/clusters/apparmor/](https://kubernetes.io/docs/tutorials/clusters/apparmor/)

ssh to the worker node

cat /etc/apparmor.d/nginx_apparmor

#include <tunables/global>

profile nginx-profile-3 flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all file writes.
  deny /** w,
}

sudo apparmor_status | grep nginx   # nothing yet: the profile is not loaded

sudo apparmor_parser -q /etc/apparmor.d/nginx_apparmor

sudo apparmor_status | grep nginx
   nginx-profile-3

Exit the worker node and return to the jump host:

vim ~/nginx-deploy.yml
apiVersion: v1
kind: Pod
metadata:
  name: writedeny
  namespace: dev
  annotations:
    container.apparmor.security.beta.kubernetes.io/busybox: localhost/nginx-profile-3
spec:
  containers:
  - name: busybox
    image: busybox:1.33.1

kubectl apply -f ~/nginx-deploy.yml
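To confirm the profile is applied and enforcing, the checks from the AppArmor tutorial apply here; a write should be denied:

kubectl -n dev exec writedeny -- cat /proc/1/attr/current   # expect: nginx-profile-3 (enforce)
kubectl -n dev exec writedeny -- touch /tmp/test            # expect: Permission denied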

Other reference material:

  1. 昕光xg's blog: https://blog.csdn.net/u011127242/category_10823035.html
  2. https://github.com/ggnanasekaran77/cks-exam-tips
  3. https://github.com/jayendrapatil/kubernetes-exercises/tree/main/topics
  4. https://github.com/lmtbelmonte/cks-cert
  5. https://github.com/PatrickPan93/cks-relative
  6. https://github.com/moabukar/CKS-Exercises-Certified-Kubernetes-Security-Specialist

Work through these once and the CKS should be a guaranteed pass! And of course there is always the official documentation: https://kubernetes.io/docs/concepts/
