## Background
I bought the CKS exam voucher together with the CKA in last year's sale, for roughly 1200 RMB. I took the exam twice: 57 on the first attempt (damn!), 62 on the second, and still no pass... Perhaps I had a premonition, because during this year's sale I had bought one more attempt as a backup. At least the third attempt passed, with 86.

Here is a summary of the 15 questions.

For details, see 昕光xg's blog post "CKS认证--CKS 2021最新真题考试经验分享--练习题04". I don't remember the exact order of the questions, so I'll record my solutions following the question list in that blog. Strongly recommended; it is packed with practical detail.
## 1. RuntimeClass (gVisor)
Create a RuntimeClass as described in the question, then update the Pods in the given namespace to use it.

Reference: official docs https://kubernetes.io/docs/concepts/containers/runtime-class/#usage
```
vim /home/cloud_user/sandbox.yml

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandbox
handler: runsc

kubectl apply -f /home/xxx/sandbox.yml
```
Set runtimeClassName: sandbox on the Pods in the xxxx namespace:
```
kubectl -n xxx edit deployments.apps work01    # add runtimeClassName: sandbox under spec.template.spec
kubectl -n xxx edit deployments.apps work02
kubectl -n xxx edit deployments.apps work03
```

That's basically it: edit the three Deployments, add runtimeClassName: sandbox, and wait for the Pods to be recreated.
How to confirm the change worked?

```
kubectl -n xxx exec xxx -- dmesg
```

or:

```
kubectl get deployments xxx -n xxx -o yaml | grep runtime
```
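As an alternative to editing each Deployment by hand, a merge patch adds the field in one go; a small sketch assuming the namespace and Deployment names used above:

```bash
# add runtimeClassName to the Pod template of each Deployment
for d in work01 work02 work03; do
  kubectl -n xxx patch deployment "$d" --type merge \
    -p '{"spec":{"template":{"spec":{"runtimeClassName":"sandbox"}}}}'
done
```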
## 2. NetworkPolicy: restrict access to a labelled group of Pods to specific Pods/namespaces
This question seems not to have changed. I read plenty of solutions online, but most of them look flawed.

In the development namespace, create a NetworkPolicy named pod-access that applies to the Pod named products-service, allowing access only from Pods in the test namespace, or from Pods in any namespace that carry the environment: staging label.

Of course the namespace, Pod name, and NetworkPolicy name may differ. The important part is to confirm the labels on the Pods and namespaces:
```
kubectl get pods -n development --show-labels
kubectl get ns --show-labels
```
```
cat network-policy.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: development
spec:
  podSelector:
    matchLabels:
      environment: staging        # must match the labels of the products-service Pod (check with --show-labels)
  policyTypes:
  - Ingress
  ingress:
  - from:                         # Pods whose namespace carries the name: testing label
    - namespaceSelector:
        matchLabels:
          name: testing
  - from:                         # Pods labelled environment: staging in any namespace
    - namespaceSelector:
        matchLabels: {}
      podSelector:
        matchLabels:
          environment: staging

kubectl apply -f network-policy.yaml
```
Official docs: https://kubernetes.io/docs/concepts/services-networking/network-policies/
## 3. NetworkPolicy: deny all Ingress / Egress
I got this one on both attempts. The details vary, but it always comes down to creating a NetworkPolicy named denynetwork that denies all Ingress traffic, all Egress traffic, or both, in the development namespace.

Reference: https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-and-all-egress-traffic

Read the question carefully and apply only the variant it asks for:
```
# deny all Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Ingress

# deny all Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Egress

# deny all Ingress and Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```
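Whichever variant the question asks for, a quick describe confirms which policy types actually ended up on the object:

```bash
kubectl -n development get networkpolicy
kubectl -n development describe networkpolicy denynetwork
```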
## 4. Scanning images with trivy
```
kubectl -n development get pods
kubectl -n development get pods --output=custom-columns="NAME:.metadata.name,IMAGE:.spec.containers[*].image"
NAME    IMAGE
work1   busybox:1.33.1
work2   nginx:1.14.2
work3   amazonlinux:2
work4   amazonlinux:1
work5   centos:7

trivy image -s HIGH,CRITICAL busybox:1.33.1
trivy image -s HIGH,CRITICAL nginx:1.14.2      # HIGH and CRITICAL findings
trivy image -s HIGH,CRITICAL amazonlinux:2
trivy image -s HIGH,CRITICAL amazonlinux:1
trivy image -s HIGH,CRITICAL centos:7          # HIGH and CRITICAL findings
```
In my exam two of the images failed the check (the centos one among them); just delete the Pods that use them.

kubectl output formatting reference: https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/#%E6%A0%BC%E5%BC%8F%E5%8C%96%E8%BE%93%E5%87%BA
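A small loop over every image in the namespace saves retyping the trivy command; a sketch assuming trivy is available on the node you are told to work from:

```bash
# list the unique images used in the namespace and scan each for HIGH/CRITICAL vulnerabilities
for img in $(kubectl -n development get pods -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u); do
  echo "== $img"
  trivy image -s HIGH,CRITICAL "$img"
done
```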
## 5. kube-bench: fix insecure findings
This question is also fairly stable: fix the master node according to the kube-bench findings. It's the usual routine, and with the hints given in the question there is nothing really tricky.
### kube-apiserver

```
vim /etc/kubernetes/manifests/kube-apiserver.yaml

- --authorization-mode=Node,RBAC
```
### kubelet

```
vim /var/lib/kubelet/config.yaml

authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
protectKernelDefaults: true

systemctl restart kubelet.service
systemctl status kubelet.service
```
### etcd

```
mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/
vim /etc/kubernetes/etcd.yaml

- --client-cert-auth=true

# move the manifest back so the etcd static Pod is recreated
mv /etc/kubernetes/etcd.yaml /etc/kubernetes/manifests/
```
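Once the kubelet and the static Pods settle, re-running kube-bench is a quick sanity check that the failed items are gone; a sketch (the exact subcommand form depends on the kube-bench version installed in the exam environment):

```bash
# newer kube-bench releases
kube-bench run --targets=master | grep -E "\[FAIL\]"
# older releases take the target as a positional argument
kube-bench master | grep -E "\[FAIL\]"
```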
This question was also exciting. I got onto the master node and found the Kubernetes environment wasn't even running... what. After editing the config files it still wouldn't come up... then my VPN dropped in the middle of it, and when I checked again the environment was suddenly fine. Inexplicable...
## 6. ClusterRole / Role
I got this on both attempts with slight variations, but it always comes down to the following three steps:

- modify an existing Role in the namespace so that it only allows the list verb on one resource type
- create a new ServiceAccount
- create a Role named role-2 that only allows update on persistentvolumeclaims, and bind it to sa-dev-1 with a RoleBinding
```
# kubectl -n db get rolebinding
NAME       ROLE          AGE
sa-dev-1   Role/role-1   7d16h

# kubectl -n db edit role role-1
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-1
  namespace: db
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/db/roles/role-1
rules:
- apiGroups:
  - ""
  resources:
  - endpoints          # only allow list on the endpoints resource
  verbs:
  - list
```
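A can-i check is a quick way to confirm the edit, assuming role-1 is bound to a ServiceAccount named sa-dev-1 as in the question text (adjust to whatever the RoleBinding's actual subject is):

```bash
# should print "yes" (list is allowed)
kubectl -n db auth can-i list endpoints --as=system:serviceaccount:db:sa-dev-1
# should print "no" once role-1 only allows list
kubectl -n db auth can-i delete endpoints --as=system:serviceaccount:db:sa-dev-1
```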
```
kubectl -n db create role role-2 --resource=persistentvolumeclaims --verb=update
kubectl create rolebinding role-2-binding --role=role-2 --serviceaccount=db:sa-dev-1 -n db
```

## 7. ServiceAccount

In the qa namespace, create a ServiceAccount named frontend-sa that must not be able to access any Secrets. Create a Pod that uses this ServiceAccount, and delete the unused ServiceAccounts in the qa namespace.

Reference: [https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: frontend-sa
  namespace: qa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: qa
spec:
  serviceAccountName: frontend-sa
  automountServiceAccountToken: false
```
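To confirm the token really isn't mounted, listing the usual mount path should fail; pod name assumed to be my-pod as in the sketch above:

```bash
# expect an error: with automountServiceAccountToken: false this directory does not exist
kubectl -n qa exec my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
```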
Running kubectl get sa -n qa should show three ServiceAccounts: frontend-sa, default, and one other. Presumably you keep frontend-sa and delete the other two? But does default really need to be deleted? Hahaha, I went back and forth on this. I think I…

## 8. Dockerfile and Pod YAML checks

This part also left me a bit puzzled. Doesn't the question say to fix two places in each file? In the Dockerfile, don't I actually need to change three? The question states that the base image is ubuntu:16.04, but the provided file uses latest, so presumably that needs changing too.
Dockerfile:

- remove the two USER root lines
- set the base image to ubuntu:16.04

Pod YAML:

- comment out the line with the privileged setting

Reference: [https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#from](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#from)

## 9. Secret

There is a Secret named db1-test in the istio-system namespace. Complete the following as required:
- ""
- store the username field in /home/candidate/user.txt and the password field in /home/candidate/old_pass.txt (you have to create the files yourself)
- create a new Secret from a given username and password
- mount that Secret in a Pod
Create a Secret named db2-test in the istio-system namespace, and a Pod that mounts it, with:

- pod name: secret-pod
- namespace: istio-system
- container name: dev-container
- image: nginx (it may also be httpd; I saw both across my attempts)
- volume name: secret-volume
- mount path: /etc/secret
1. Write the base64-decoded Secret values to the files

Many write-ups use kubectl with jsonpath; being old-fashioned, I did it the manual way:

```
kubectl get secrets -n istio-system db1-test -o yaml
echo "" | base64 -d > /home/candidate/old_pass.txt
echo "" | base64 -d > /home/candidate/user.txt
```
Watch the order: when you dump the Secret, the password key is listed first (the data keys are sorted), while I instinctively handle username first, so it's easy to mix the two up.
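For reference, the jsonpath form most write-ups use does the same thing in one step per field (key names username/password as given in the question):

```bash
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.username}' | base64 -d > /home/candidate/user.txt
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.password}' | base64 -d > /home/candidate/old_pass.txt
```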
2. Create the new Secret

Reference: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/

```
kubectl create secret generic db2-test -n istio-system \
  --from-literal=username=production-instance \
  --from-literal=password=KvLftKgs4aHV
```
3. Mount the Secret in a Pod

```
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: istio-system
spec:
  containers:
  - name: dev-container
    image: nginx               # mine was httpd
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: db2-test
```
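Once the Pod is running, a quick check that the Secret landed at the requested path:

```bash
kubectl -n istio-system exec secret-pod -- ls /etc/secret
kubectl -n istio-system exec secret-pod -- cat /etc/secret/username
```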
## 10. PodSecurityPolicy
Basically follow the official docs:

- create a PodSecurityPolicy named restrict-policy that prevents privileged Pods from being created
- create a ClusterRole named restrict-access-role that allows use of the new restrict-policy PodSecurityPolicy
- create a ServiceAccount named psp-denial-sa in the staging namespace
- create a ClusterRoleBinding named deny-access-bind that binds the new ServiceAccount to the ClusterRole
```
vim restrict-policy.yml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-policy
spec:
  privileged: false            # remember: this has to be false
  runAsUser:
    rule: "RunAsAny"
  fsGroup:
    rule: "RunAsAny"
  seLinux:
    rule: "RunAsAny"
  supplementalGroups:
    rule: "RunAsAny"

kubectl apply -f restrict-policy.yml
```
```
kubectl create clusterrole restrict-access-role --verb=use --resource=psp --resource-name=restrict-policy
kubectl create sa psp-denial-sa -n staging
kubectl create clusterrolebinding deny-access-bind --clusterrole=restrict-access-role --serviceaccount=staging:psp-denial-sa
```
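A can-i check confirms the ServiceAccount may use the policy (the policy itself only takes effect once the PodSecurityPolicy admission plugin is enabled on the apiserver):

```bash
# should print "yes"
kubectl auth can-i use podsecuritypolicy/restrict-policy \
  --as=system:serviceaccount:staging:psp-denial-sa
```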
Note: see https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems
## 11. Container security: delete Pods that are privileged or mount volumes

Check every Pod in the production namespace for privileged containers or mounted volumes:
```
kubectl get pods NAME -n production -o jsonpath={.spec.volumes} | jq
kubectl get pods NAME -o yaml -n production | grep "privi.*: true"
```
Then delete the Pods that are privileged or mount a volume. If I remember right there were three Pods, two of which had to be deleted.
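The same two checks run over every Pod in the namespace; a small loop sketch (Pod names differ per environment):

```bash
for p in $(kubectl -n production get pods -o name); do
  echo "== $p"
  # non-empty output here means the Pod defines volumes
  kubectl -n production get "$p" -o jsonpath='{.spec.volumes}{"\n"}'
  # any match here means a privileged container
  kubectl -n production get "$p" -o yaml | grep -i "privileged: true"
done
```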
## 12. Audit logging

The audit logging question was exciting. I have no idea why, but on my first two attempts the apiserver never came back up after my changes... Thinking back, I deleted the rules that were already in the provided policy file, which was probably the mistake. The question, roughly:
A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml; it only specifies what not to log. Extend it so that it also logs:

- namespaces changes at the RequestResponse level
- the request body of persistentvolumes changes in the namespace front-apps
- ConfigMap and Secret changes in all namespaces at the Metadata level
```
cat /etc/kubernetes/logpolicy/sample-policy.yaml

apiVersion: audit.k8s.io/v1   # This is required.
kind: Policy
omitStages:
- "RequestReceived"
rules:
  # keep the pre-existing "what not to log" rules from the provided file here
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
- level: Request
  resources:
  - group: ""
    resources: ["persistentvolumes"]
  namespaces: ["front-apps"]
- level: Metadata
  resources:
  - group: ""                 # core API group
    resources: ["secrets", "configmaps"]
- level: Metadata
  omitStages:
  - "RequestReceived"
```
Then in /etc/kubernetes/manifests/kube-apiserver.yaml set the audit policy file path, the log path, the maximum number of days to retain logs, and the maximum number of retained audit log files:
```
- --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # point this at the policy file given in the question
- --audit-log-path=/var/log/kubernetes/audit.log
- --audit-log-maxage=10
- --audit-log-maxbackup=1
```
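Edits to this manifest restart the apiserver immediately; if it doesn't come back (it didn't for me on two attempts), a few checks that usually find the cause, assuming a standard kubeadm layout:

```bash
# is the kube-apiserver container running, restarting, or gone?
crictl ps -a | grep kube-apiserver
# kubelet logs usually name the bad flag or an unreadable file path
journalctl -u kubelet --since "5 min ago" | grep -i apiserver
# static Pod container logs live under /var/log/pods (directory name includes the node name)
ls /var/log/pods/ | grep kube-apiserver
```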
### Restart and verify
```
systemctl restart kubelet
kubectl apply -f xxx.yaml
tail -f /var/log/kubernetes/audit.log
```

Note: back up the files before changing anything! And follow the official docs: [https://kubernetes.io/docs/tasks/debug-application-cluster/audit/](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/). The volume mounts for the policy and log files are already configured in the exam environment, so there is nothing to add there.

## 13. Image policy (ImagePolicyWebhook)

Reference: [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)

### admission-control.conf (or admission-control.yaml)
```
vim /etc/kubernetes/admission-control/admission-control.conf

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  path: imagepolicy.conf
```

### imagepolicy.conf (or imagepolicy.json)
```
vim /etc/kubernetes/admission-control/imagepolicy.conf

{
  "imagePolicy": {
    "kubeConfigFile": "/etc/kubernetes/admission-control/imagepolicy_backend.kubeconfig",
    "allowTTL": 50,
    "denyTTL": 50,
    "retryBackoff": 500,
    "defaultAllow": false
  }
}
```
Note: the only change here is defaultAllow from true to false.

### imagepolicy_backend.kubeconfig
```
vim /etc/kubernetes/admission-control/imagepolicy_backend.kubeconfig

apiVersion: v1
kind: Config
clusters:
- name: trivy-k8s-webhook
  cluster:
    certificate-authority: /etc/kubernetes/admission-control/imagepolicywebhook-ca.crt
    server: https://acg.trivy.k8s.webhook...
contexts:
- name: trivy-k8s-webhook
  context:
    cluster: trivy-k8s-webhook
    user: api-server
current-context: trivy-k8s-webhook
preferences: {}
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission-control/api-server-client.crt
    client-key: /etc/kubernetes/admission-control/api-server-client.key
```

The only change here is filling in the server field; everything else was already in place.
### kube-apiserver
```
vim /etc/kubernetes/manifests/kube-apiserver.yaml

- --admission-control-config-file=/etc/kubernetes/admission-control/admission-control.conf
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
```
### Verify
```
systemctl restart kubelet
kubectl apply -f /root/xxx/vulnerable-manifest.yaml
tail -n 10 /var/log/imagepolicy/roadrunner.log
```

## 14. api-server parameter adjustment

I never really understood this question. The only part I worked out was changing this flag on the apiserver:
```
- --enable-admission-plugins=AlwaysAdmit        # before
- --enable-admission-plugins=NodeRestriction    # after
```
Then, following the hints in the question, I also deleted a clusterrole related to anonymous access... after that I had no idea what else to do. Realistically I probably got no points for this one.

## 15. AppArmor

Note: see [https://kubernetes.io/docs/tutorials/clusters/apparmor/](https://kubernetes.io/docs/tutorials/clusters/apparmor/)

### SSH to the worker node
```
cat /etc/apparmor.d/nginx_apparmor

include <tunables/global>

profile nginx-profile-3 flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
```
```
sudo apparmor_status | grep nginx
sudo apparmor_parser -q /etc/apparmor.d/nginx_apparmor    # load the profile
sudo apparmor_status | grep nginx
nginx-profile-3
```

### Exit the worker node and return to the jump host
```
vim https://github.com/lmtbelmont...

apiVersion: v1
kind: Pod
metadata:
  name: writedeny
  namespace: dev
  annotations:
    container.apparmor.security.beta.kubernetes.io/busybox: localhost/nginx-profile-3
spec:
  containers:
  - name: busybox
    image: busybox:1.33.1
```
```
kubectl apply -f ~/nginx-deploy.yml
```
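To confirm the profile is actually enforced, a write inside the container should be refused (pod name as in the manifest above):

```bash
# expect "Permission denied": the nginx-profile-3 profile denies all file writes
kubectl -n dev exec writedeny -- touch /tmp/test
```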
# Other reference material
- 昕光xg's blog: https://blog.csdn.net/u011127242/category_10823035.html
- https://github.com/ggnanasekaran77/cks-exam-tips
- https://github.com/jayendrapatil/kubernetes-exercises/tree/main/topics
- https://github.com/lmtbelmonte/cks-cert
- https://github.com/PatrickPan93/cks-relative
- https://github.com/moabukar/CKS-Exercises-Certified-Kubernetes-Security-Specialist
Work through the resources above once and the CKS should be a sure pass. And of course there are the official docs: https://kubernetes.io/docs/concepts/