Opening

Introduction:

- Sharpening the axe doesn't delay the woodcutting.
- A craftsman who wishes to do his work well must first sharpen his tools.
- Part 1: "Useful K8S Tools, Part 1 - How to Merge Multiple kubeconfigs?"
- Part 2: "Useful K8S Tools, Part 2 - The Terminal UI K9S"
- Part 3: "Useful K8S Tools, Part 3 - The Graphical UI Lens"

In "Useful K8S Tools, Part 1 - How to Merge Multiple kubeconfigs?" we introduced krew, the plugin manager for kubectl. Following up on that, this post introduces a number of practical kubectl plugins.
Practical kubectl Plugins

access-matrix

Shows an RBAC access matrix for server resources.

Have you ever wondered what access rights you have on the Kubernetes cluster you've been handed? For a single resource you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? That is what this plugin does: it lists the access rights of the current user across all server resources, similar to kubectl auth can-i --list.
Installation

kubectl krew install access-matrix

Usage

# Review access to cluster-scoped resources
$ kubectl access-matrix
# Review access to namespaced resources in 'default'
$ kubectl access-matrix --namespace default
# Review access as a different user
$ kubectl access-matrix --as other-user
# Review access as a service-account
$ kubectl access-matrix --sa kube-system:namespace-controller
# Review access for different verbs
$ kubectl access-matrix --verbs get,watch,patch
# Review access rights diff with another service account
$ kubectl access-matrix --diff-with sa=kube-system:namespace-controller

The output looks like this:
ca-cert

Prints the PEM CA certificate of the current cluster.

Installation

kubectl krew install ca-cert

Usage

kubectl ca-cert
cert-manager

This one needs no introduction: the famous cert-manager, used to manage certificate resources inside a cluster.

It has to be used together with cert-manager installed in the K8S cluster. I'll cover it in detail when time permits.
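As a quick sketch of the in-cluster part, cert-manager itself is typically installed with Helm; the repository and chart names below follow the upstream Jetstack documentation:

```shell
# Add the Jetstack chart repository and install cert-manager together with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```

Once the cert-manager pods are Ready, the kubectl plugin can inspect and manage the Certificate resources it creates.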
cost

Views cluster cost information.

kubectl-cost is a kubectl plugin that provides easy CLI access to Kubernetes cost allocation metrics via the Kubecost API. It allows developers, DevOps engineers, and others to quickly determine the cost and efficiency of Kubernetes workloads.
Installation

Install Kubecost (for the Helm options, see: cost-analyzer-helm-chart):

helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm upgrade -i --create-namespace kubecost kubecost/cost-analyzer --namespace kubecost --set kubecostToken="a3ViZWN0bEBrdWJlY29zdC5jb20=xm343yadf98"

When the deployment completes, it prints:

NAME: kubecost
LAST DEPLOYED: Sat Nov 27 13:44:30 2021
NAMESPACE: kubecost
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
--------------------------------------------------
Kubecost has been successfully installed. When pods are Ready, you can enable port-forwarding with the following command:

    kubectl port-forward --namespace kubecost deployment/kubecost-cost-analyzer 9090

Next, navigate to http://localhost:9090 in a web browser.

Having installation issues? View our Troubleshooting Guide at http://docs.kubecost.com/troubleshoot-install
Then install kubectl cost:

kubectl krew install cost

Usage

The results can also be viewed directly in a browser:
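The plugin can also be used from the CLI; a minimal sketch based on the kubectl-cost README (these subcommands assume a reachable Kubecost install in the cluster):

```shell
# Show projected monthly cost aggregated by namespace
kubectl cost namespace

# Show cost per deployment over the last 5 days, broken down by resource
kubectl cost deployment --window 5d --show-all-resources
```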
ctx

Switches between contexts in your kubeconfig.

Installation

kubectl krew install ctx

Usage

Usage is simple: run kubectl ctx, then pick the context you want to switch to.

$ kubectl ctx
Switched to context "multicloud-k3s".
deprecations

Checks the cluster for deprecated objects. Typically used as a pre-flight check before upgrading K8S. Also known as KubePug.

Installation

kubectl krew install deprecations

Usage

Usage is simple: just run kubectl deprecations. As shown below, it tells you which APIs are already deprecated, which makes planning a K8S upgrade easier.
$ kubectl deprecations
W1127 16:04:58.641429   28561 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1127 16:04:58.664058   28561 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W1127 16:04:59.622247   28561 warnings.go:70] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W1127 16:05:00.777598   28561 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W1127 16:05:00.808486   28561 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
RESULTS:
Deprecated APIs:

PodSecurityPolicy found in policy/v1beta1
  ├─ PodSecurityPolicy governs the ability to make requests that affect the Security Context that will be applied to a pod and container. Deprecated in 1.21.
    -> GLOBAL: kube-prometheus-stack-admission
    -> GLOBAL: loki-grafana-test
    -> GLOBAL: loki-promtail
    -> GLOBAL: loki
    -> GLOBAL: loki-grafana
    -> GLOBAL: prometheus-operator-grafana-test
    -> GLOBAL: prometheus-operator-alertmanager
    -> GLOBAL: prometheus-operator-grafana
    -> GLOBAL: prometheus-operator-prometheus
    -> GLOBAL: prometheus-operator-prometheus-node-exporter
    -> GLOBAL: prometheus-operator-kube-state-metrics
    -> GLOBAL: prometheus-operator-operator
    -> GLOBAL: kubecost-grafana
    -> GLOBAL: kubecost-cost-analyzer-psp

ComponentStatus found in /v1
  ├─ ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+
    -> GLOBAL: controller-manager
    -> GLOBAL: scheduler

Deleted APIs:
It can also be integrated into a CI pipeline:

$ kubectl deprecations --input-file=./deployment/ --error-on-deleted --error-on-deprecated
df-pv

Like df (disk free), but for the Persistent Volumes in your cluster.

Installation

kubectl krew install df-pv

Usage

Run kubectl df-pv:
get-all

Really gets all of a cluster's Kubernetes resources.

Installation

kubectl krew install get-all

Usage

Just run kubectl get-all; sample output looks like this:
images

Shows the container images used in the cluster.

Installation

kubectl krew install images

Usage

Run kubectl images -A; the result looks like this:
kubesec-scan

Scans Kubernetes resources with kubesec.io.

Installation

kubectl krew install kubesec-scan

Usage

An example:
$ kubectl kubesec-scan statefulset loki -n loki-stack
scanning statefulset loki in namespace loki-stack
kubesec.io score: 4
-----------------
Advise
1. .spec .volumeClaimTemplates[] .spec .accessModes | index("ReadWriteOnce")
2. containers[] .securityContext .runAsNonRoot == true
Force the running image to run as a non-root user to ensure least privilege
3. containers[] .securityContext .capabilities .drop
Reducing kernel capabilities available to a container limits its attack surface
4. containers[] .securityContext .runAsUser > 10000
Run as a high-UID user to avoid conflicts with the host's user table
5. containers[] .securityContext .capabilities .drop | index("ALL")
Drop all capabilities and add only those required to reduce syscall attack surface
neat

Removes clutter from Kubernetes output to make it more readable.

Installation

kubectl krew install neat

Usage

An example is below. Fields we usually don't care about, such as creationTimestamp and managedFields, are removed. Very clean.
$ kubectl neat get -- pod loki-0 -oyaml -n loki-stack
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: b9ab988df734dccd44833416670e70085a2a31cfc108e68605f22d3a758f50b5
    prometheus.io/port: http-metrics
    prometheus.io/scrape: "true"
  labels:
    app: loki
    controller-revision-hash: loki-79684c849
    name: loki
    release: loki
    statefulset.kubernetes.io/pod-name: loki-0
  name: loki-0
  namespace: loki-stack
spec:
  containers:
  - args:
    - -config.file=/etc/loki/loki.yaml
    image: grafana/loki:2.3.0
    livenessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    name: loki
    ports:
    - containerPort: 3100
      name: http-metrics
    readinessProbe:
      httpGet:
        path: /ready
        port: http-metrics
      initialDelaySeconds: 45
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - mountPath: /etc/loki
      name: config
    - mountPath: /data
      name: storage
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jhsvm
      readOnly: true
  hostname: loki-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  securityContext:
    fsGroup: 10001
    runAsGroup: 10001
    runAsNonRoot: true
    runAsUser: 10001
  serviceAccountName: loki
  subdomain: loki-headless
  terminationGracePeriodSeconds: 4800
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: config
    secret:
      secretName: loki
  - name: storage
  - name: kube-api-access-jhsvm
    projected:
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              fieldPath: metadata.namespace
            path: namespace
node-shell

Spawns a root shell on a node via kubectl.

Installation

kubectl krew install node-shell

Usage

An example:
$ kubectl node-shell instance-ykx0ofns
spawning "nsenter-fr393w" on "instance-ykx0ofns"
If you don't see a command prompt, try pressing enter.
root@instance-ykx0ofns:/# hostname
instance-ykx0ofns
root@instance-ykx0ofns:/# ifconfig
...
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.64.4  netmask 255.255.240.0  broadcast 192.168.79.255
        inet6 fe80::f820:20ff:fe16:3084  prefixlen 64  scopeid 0x20<link>
        ether fa:20:20:16:30:84  txqueuelen 1000  (Ethernet)
        RX packets 24386113  bytes 26390915146 (26.3 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18840452  bytes 3264860766 (3.2 GB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
root@instance-ykx0ofns:/# exit
logout
pod default/nsenter-fr393w terminated (Error)
pod "nsenter-fr393w" deleted
ns

Switches the active Kubernetes namespace.

Installation

kubectl krew install ns

Usage
$ kubectl ns loki-stack
Context "multicloud-k3s" modified.
Active namespace is "loki-stack".
$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
loki-promtail-fbbjj            1/1     Running   0          12d
loki-promtail-sx5gj            1/1     Running   0          12d
loki-0                         1/1     Running   0          12d
loki-grafana-8bffbb679-szdpj   1/1     Running   0          12d
loki-promtail-hmc26            1/1     Running   0          12d
loki-promtail-xvnbc            1/1     Running   0          12d
loki-promtail-5d5h8            1/1     Running   0          12d
outdated

Finds outdated container images running in the cluster.

Installation

kubectl krew install outdated

Usage
$ kubectl outdated
Image                                                Current               Latest                Behind
index.docker.io/rancher/klipper-helm                 v0.6.6-build20211022  0.6.8-build20211123   2
docker.io/rancher/klipper-helm                       v0.6.4-build20210813  0.6.8-build20211123   4
docker.io/alekcander/k3s-flannel-fixer               0.0.2                 0.0.2                 0
docker.io/rancher/metrics-server                     v0.3.6                0.4.1                 1
docker.io/rancher/coredns-coredns                    1.8.3                 1.8.3                 0
docker.io/rancher/library-traefik                    2.4.8                 2.4.9                 1
docker.io/rancher/local-path-provisioner             v0.0.19               0.0.20                1
docker.io/grafana/promtail                           2.1.0                 2.4.1                 5
docker.io/grafana/loki                               2.3.0                 2.4.1                 2
quay.io/kiwigrid/k8s-sidecar                         1.12.3                1.14.2                5
docker.io/grafana/grafana                            8.1.6                 8.3.0-beta1           8
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v0.18.1               1.3.0                 9
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v0.20.0               0.23.0                5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v0.0.1                0.0.1                 0
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v1.9.4                2.0.0-beta            5
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v2.15.2               2.31.1                38
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v0.35.0               0.42.1                11
docker.io/kiwigrid/k8s-sidecar                       0.1.20                1.14.2                46
docker.io/grafana/grafana                            6.5.2                 8.3.0-beta1           75
registry.cn-hangzhou.aliyuncs.com/kubeapps/quay...   v0.35.0               0.42.1                12
docker.io/squareup/ghostunnel                        v1.5.2                1.5.2                 0
docker.io/grafana/grafana                            8.1.2                 8.3.0-beta1           12
docker.io/kiwigrid/k8s-sidecar                       1.12.3                1.14.2                5
docker.io/prom/prometheus                            v2.22.2               2.31.1                21
popeye (Popeye the Sailor)

Scans the cluster for potential resource issues. This is the same Popeye that K9S uses.

Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations. It sanitizes the cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure best practices are in place, heading off future headaches. It aims to reduce the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics server, it reports potential resource over- or under-allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!
Installation

kubectl krew install popeye

Usage

As follows:
❯ kubectl popeye

[Popeye ASCII-art banner: "Biffs`em and Buffs`em!"]

DAEMONSETS (1 SCANNED)                                                 0 1 0 ✅ 0 0
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-promtail.........................................................
    [POP-404] Deprecation check failed. Unable to assert resource version.
  promtail
    [POP-106] No resources requests/limits defined.

DEPLOYMENTS (1 SCANNED)                                                0 1 0 ✅ 0 0
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-grafana..........................................................
    [POP-404] Deprecation check failed. Unable to assert resource version.
  grafana
    [POP-106] No resources requests/limits defined.
  grafana-sc-datasources
    [POP-106] No resources requests/limits defined.

PODS (7 SCANNED)                                                       0 7 0 ✅ 0 0
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
· loki-stack/loki-0................................................................
    [POP-206] No PodDisruptionBudget defined.
    [POP-301] Connects to API Server? ServiceAccount token is mounted.
  loki
    [POP-106] No resources requests/limits defined.
· loki-stack/loki-grafana-8bffbb679-szdpj..........................................
    [POP-206] No PodDisruptionBudget defined.
    [POP-301] Connects to API Server? ServiceAccount token is mounted.
  grafana
    [POP-106] No resources requests/limits defined.
    [POP-105] Liveness probe uses a port#, prefer a named port.
    [POP-105] Readiness probe uses a port#, prefer a named port.
  grafana-sc-datasources
    [POP-106] No resources requests/limits defined.
· loki-stack/loki-promtail-5d5h8...................................................
    [POP-206] No PodDisruptionBudget defined.
    [POP-301] Connects to API Server? ServiceAccount token is mounted.
    [POP-302] Pod could be running as root user. Check SecurityContext/image.
  promtail
    [POP-106] No resources requests/limits defined.
    [POP-103] No liveness probe.
    [POP-306] Container could be running as root user. Check SecurityContext/Image.

SUMMARY
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
Your cluster score: 80 -- B

[Popeye ASCII-art grade mascot]
resource-capacity

Provides an overview of resource requests, limits, and utilization.

This is a simple CLI that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster. It attempts to combine the best parts of the output from kubectl top and kubectl describe into an easy-to-use CLI focused on cluster resources.

Installation

kubectl krew install resource-capacity

Usage

The example below looks at nodes; it can also look at pods, filter by label, sort, and more.
$ kubectl resource-capacity
NODE                      CPU REQUESTS   CPU LIMITS   MEMORY REQUESTS   MEMORY LIMITS
*                         710m (14%)     300m (6%)    535Mi (6%)        257Mi (3%)
09b2brd7robnn5zi-1106883  0Mi (0%)       0Mi (0%)     0Mi (0%)          0Mi (0%)
hecs-348550               100m (10%)     100m (10%)   236Mi (11%)       27Mi (1%)
instance-wy7ksibk         310m (31%)     0Mi (0%)     174Mi (16%)       0Mi (0%)
instance-ykx0ofns         200m (20%)     200m (20%)   53Mi (5%)         53Mi (5%)
izuf656om146vu1n6pd6lpz   100m (10%)     0Mi (0%)     74Mi (3%)         179Mi (8%)
score

Static analysis of your Kubernetes object definitions.

Installation

kubectl krew install score

Usage

It can also be integrated with CI. An example:
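The original screenshot was lost; as a minimal sketch, the plugin is pointed at manifest files and prints a scored list of findings (the file paths below are hypothetical):

```shell
# Score a single local manifest before applying it
kubectl score deployment/webapp.yaml

# In CI, score every rendered manifest and fail the build on findings
kubectl score rendered-manifests/*.yaml
```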
sniff

Highly recommended. This plugin once helped me analyze a pod network problem. It uses tcpdump and Wireshark to start a remote capture on a pod.

Installation

kubectl krew install sniff

Usage

kubectl < 1.12:

kubectl plugin sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

kubectl >= 1.12:

kubectl sniff <POD_NAME> [-n <NAMESPACE_NAME>] [-c <CONTAINER_NAME>] [-i <INTERFACE_NAME>] [-f <CAPTURE_FILTER>] [-o OUTPUT_FILE] [-l LOCAL_TCPDUMP_FILE] [-r REMOTE_TCPDUMP_FILE]

For example:
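A concrete invocation might look like this (the pod and namespace are taken from the Loki examples earlier in this post; -f is the capture filter and -o the output file):

```shell
# Capture port-3100 traffic from the loki-0 pod into a pcap file,
# which can then be opened in Wireshark for analysis
kubectl sniff loki-0 -n loki-stack -f "port 3100" -o loki.pcap
```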
starboard

Another security scanning tool.

Installation

kubectl krew install starboard

Usage

kubectl starboard report deployment/nginx > nginx.deploy.html

This generates a security report:
tail

Kubernetes tail: streams the logs of all containers of all matched pods. Matches pods by service, replicaset, deployment, and more. Adjusts to a changing cluster: pods are added to or removed from the log stream as they fall in or out of the selection.

Installation

kubectl krew install tail

Usage
# Match all pods
$ kubectl tail
# Match all pods in the 'staging' namespace
$ kubectl tail --ns staging
# Match pods of the 'workers' replicaset in any namespace
$ kubectl tail --rs workers
# Match pods of the 'workers' replicaset in the 'staging' namespace
$ kubectl tail --rs staging/workers
# Match pods belonging to the 'webapp' deployment AND the 'frontend' service
$ kubectl tail --svc frontend --deploy webapp
The output looks like this; each line is prefixed with the pod it came from:
trace

Traces Kubernetes pods and nodes with system tools.

kubectl trace is a kubectl plugin that allows you to schedule the execution of bpftrace programs in your Kubernetes cluster.

Installation

kubectl krew install trace

Usage

I'm not very familiar with this area, so I won't comment further.
kubectl trace run ip-180-12-0-152.ec2.internal -f read.bt
tree

A kubectl plugin to explore the ownership relationships between Kubernetes objects through their ownerReferences.

Installation

Install with the krew plugin manager:

kubectl krew install tree
kubectl tree --help

Usage

DaemonSet example:
Knative Service example:
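The screenshots for the two examples above were lost; the invocations behind them look roughly like this (the object names are hypothetical, and the ksvc shortname assumes Knative is installed in the cluster):

```shell
# Show everything owned (transitively) by a DaemonSet
kubectl tree daemonset fluentd -n kube-system

# Works for any kind, including custom resources such as a Knative Service
kubectl tree ksvc hello -n default
```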
tunnel

A reverse tunnel between the cluster and your own machine.

It allows you to expose your machine as a service in the cluster, or expose it to a specific deployment. The goal of this project is to provide a holistic solution to this particular problem (accessing a local machine from a Kubernetes pod).

Installation

kubectl krew install tunnel

Usage

The following command allows pods in the cluster to reach your local web application (listening on port 8000) over HTTP, i.e. Kubernetes applications can send requests to the myapp service, which are tunneled to your local port 8000:
ktunnel expose myapp 80:8000
ktunnel expose myapp 80:8000 -r  # deployment & service will be reused if they exist, or created otherwise
warp

Syncs local files into a Pod and executes commands there.

A kubectl plugin that works like kubectl run, with rsync on top.

It creates a temporary Pod, syncs your local files into the desired container, and executes any command.

For example, this can be used to build and run your local project in Kubernetes, where there are more resources, the required architecture, etc., while still using your preferred editor locally.

Installation

kubectl krew install warp

Usage
# Start bash in an ubuntu image, and sync the files in the current directory into the container
kubectl warp -i -t --image ubuntu testing -- /bin/bash

# Start a Node.js project in a node container
cd examples/nodejs
kubectl warp -i -t --image node testing-node -- npm run watch
who-can

Shows who has RBAC permissions to access Kubernetes resources.

Installation

kubectl krew install who-can

Usage
$ kubectl who-can create ns --all-namespaces
No subjects found with permissions to create ns assigned through RoleBindings

CLUSTERROLEBINDING             SUBJECT           TYPE            SA-NAMESPACE
cluster-admin                  system:masters    Group
helm-kube-system-traefik-crd   helm-traefik-crd  ServiceAccount  kube-system
helm-kube-system-traefik       helm-traefik      ServiceAccount  kube-system
EOF

"When three walk together, there is always one I can learn from; knowledge shared belongs to all." Written by the 东风微鸣 tech blog, EWhisper.cn.