Background
Here's the situation: I have a Kubernetes cluster used for testing, and I found that namespaces could no longer be deleted normally — they stayed stuck in the Terminating state, and even force deletion didn't work. To reproduce the problem, I manually created a namespace named "test-b", and sure enough it couldn't be deleted either. So I started investigating. In the end it turned out to be an embarrassing, self-inflicted mix-up with zero technical sophistication. The conclusion isn't important; what I want to share is the troubleshooting process.
The troubleshooting process
- A normal delete blocked indefinitely and could only be killed with Ctrl+C
[root@k8s-b-master ~]# kubectl delete ns test-b
namespace "test-b" deleted
- The namespace stayed stuck in the Terminating state
[root@k8s-b-master ~]# kubectl get ns
NAME     STATUS        AGE
test-b   Terminating   18h
- Force deletion also blocked indefinitely and likewise had to be killed with Ctrl+C
[root@k8s-b-master ~]# kubectl delete namespace test-b --grace-period=0 --force
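For namespaces stuck in Terminating because of leftover finalizers, a commonly cited workaround is to submit the object with an empty `spec.finalizers` list to the namespace's `finalize` subresource. A minimal sketch (the temp-file path is just an example; as the describe output below shows, the blocker in this case turned out to be API discovery rather than a finalizer, so this alone would not have helped here):

```shell
# Dump the namespace, clear its finalizer list, and push the result to
# the finalize subresource, which bypasses the normal deletion flow.
kubectl get ns test-b -o json \
  | python3 -c 'import json,sys; o=json.load(sys.stdin); o["spec"]["finalizers"]=[]; print(json.dumps(o))' \
  > /tmp/ns-finalize.json
kubectl replace --raw /api/v1/namespaces/test-b/finalize -f /tmp/ns-finalize.json
```

Use this with care: it tells the API server to stop waiting for cleanup, so any resources the finalizer was guarding may be orphaned.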
- kubectl describe revealed the cause
[root@k8s-b-master ~]# kubectl describe ns test-b
Name: test-b
Labels: kubernetes.io/metadata.name=test-b
Annotations: <none>
Status: Terminating
Conditions:
Type Status LastTransitionTime Reason Message
---- ------ ------------------ ------ -------
NamespaceDeletionDiscoveryFailure True Fri, 05 May 2023 14:06:52 +0800 DiscoveryFailed Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
NamespaceDeletionGroupVersionParsingFailure False Fri, 05 May 2023 14:06:52 +0800 ParsedGroupVersions All legacy kube types successfully parsed
NamespaceDeletionContentFailure False Fri, 05 May 2023 14:06:52 +0800 ContentDeleted All content successfully deleted, may be waiting on finalization
NamespaceContentRemaining False Fri, 05 May 2023 14:06:52 +0800 ContentRemoved All content successfully removed
NamespaceFinalizersRemaining False Fri, 05 May 2023 14:06:52 +0800 ContentHasNoFinalizers All content-preserving finalizers finished
No resource quota.
No LimitRange resource.
The problem is right here: DiscoveryFailed: Discovery failed for some groups, 1 failing: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
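The error names an aggregated API group, so the corresponding APIService object is worth inspecting directly. A sketch, assuming the standard v1beta1.metrics.k8s.io registration:

```shell
# An unavailable aggregated API shows AVAILABLE=False in the summary
# view; the Available condition carries the reason (e.g. the backing
# Service having no ready endpoints).
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].reason}: {.status.conditions[?(@.type=="Available")].message}'
```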
Could the API Server itself be broken? I kept digging.
- Check whether the API Server and the other control-plane components are running normally
[root@k8s-b-master ~]# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
scheduler Healthy ok
From the output, controller-manager, scheduler, and etcd are all in a Healthy state.
- Next, check the API Server logs for errors or anomalies
# Get the API Server Pod name:
[root@k8s-b-master ~]# kubectl get pods -n kube-system -l component=kube-apiserver -o name
pod/kube-apiserver-k8s-b-master
# View the API Server logs:
[root@k8s-b-master ~]# kubectl logs -n kube-system kube-apiserver-k8s-b-master
...
...
W0506 01:00:12.965627 1 handler_proxy.go:105] no RequestInfo found in the context
E0506 01:00:12.965711 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type: X-Content-Type-Options:[nosniff]]
I0506 01:00:12.965722 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0506 01:00:12.968678 1 handler_proxy.go:105] no RequestInfo found in the context
E0506 01:00:12.968709 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0506 01:00:12.968719 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0506 01:00:43.794546 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0506 01:00:44.023629 1 controller.go:616] quota admission added evaluator for: endpoints
W0506 01:01:12.965985 1 handler_proxy.go:105] no RequestInfo found in the context
E0506 01:01:12.966062 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type: X-Content-Type-Options:[nosniff]]
I0506 01:01:12.966069 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0506 01:01:12.969496 1 handler_proxy.go:105] no RequestInfo found in the context
E0506 01:01:12.969527 1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
...
...
# the rest of the log just repeats the same entries
...
...
Judging from the log, the problem seems related to metrics.k8s.io/v1beta1, the aggregated API used to serve Kubernetes cluster metrics. Most likely its backend (a metrics server of some kind) is down, so the API Server cannot proxy requests for that group and keeps failing with 503.
- But Kubernetes doesn't ship with a metrics-server component by default. Check anyway:
[root@k8s-b-master ~]# kubectl get pods -n kube-system -l k8s-app=metrics-server
No resources found in kube-system namespace.
No Pod labeled k8s-app=metrics-server exists in kube-system, which is expected — Kubernetes does not install the Metrics Server component by default. So why is something depending on that API now?
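Which Service actually backs metrics.k8s.io can be read straight off the APIService spec rather than guessed from Pod labels. A sketch (expecting something like monitoring/prometheus-adapter is an assumption based on a stock kube-prometheus install):

```shell
# Print which namespace/Service serves the aggregated metrics API.
kubectl get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.spec.service.namespace}/{.spec.service.name}{"\n"}'
```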
At this point I suddenly remembered deploying kube-prometheus a while back. At the time, kube-state-metrics failed to pull its image and never ran properly; since this was a throwaway test environment I let it slide, and eventually forgot about it entirely.
A quick look confirmed it:
[root@k8s-b-master ~]# kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 14 (30m ago) 5d19h
alertmanager-main-1 2/2 Running 14 (29m ago) 5d19h
alertmanager-main-2 2/2 Running 14 (29m ago) 5d19h
blackbox-exporter-69f4d86566-wn6q7 3/3 Running 21 (30m ago) 5d19h
grafana-56c4977497-2rjmt 1/1 Running 7 (29m ago) 5d19h
kube-state-metrics-56f8746666-lsps6 2/3 ImagePullBackOff 14 (29m ago) 5d19h # never came up due to an image pull failure
node-exporter-d8c5k 2/2 Running 14 (30m ago) 5d19h
node-exporter-gvfx2 2/2 Running 14 (30m ago) 5d19h
node-exporter-gxccx 2/2 Running 14 (29m ago) 5d19h
node-exporter-h292z 2/2 Running 14 (30m ago) 5d19h
node-exporter-mztj6 2/2 Running 14 (29m ago) 5d19h
node-exporter-rvfz6 2/2 Running 14 (29m ago) 5d19h
node-exporter-twg9q 2/2 Running 13 (29m ago) 5d19h
prometheus-adapter-77f56b865b-76nzk 0/1 ImagePullBackOff 0 5d19h
prometheus-adapter-77f56b865b-wbcwl 0/1 ImagePullBackOff 0 5d19h
prometheus-k8s-0 2/2 Running 14 (30m ago) 5d19h
prometheus-k8s-1 2/2 Running 14 (29m ago) 5d19h
prometheus-operator-788dd7cb76-85zwj 2/2 Running 14 (29m ago) 5d19h
Since this environment was no longer needed anyway, I just tore the whole thing down:
[root@k8s-b-master ~]# kubectl delete -f kube-prometheus-main/manifests/
[root@k8s-b-master ~]# kubectl delete -f kube-prometheus-main/manifests/setup/
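Deleting the whole stack works, but if the environment were still needed, a narrower fix would be to remove (or repair) just the broken API registration. A hedged sketch:

```shell
# Deleting the dangling registration unblocks API discovery — and thus
# namespace deletion — immediately; re-apply it once the backing Pods
# are healthy again.
kubectl delete apiservice v1beta1.metrics.k8s.io
```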
Checking the namespaces again, test-b is gone and deletion works normally. Problem solved:
[root@k8s-b-master ~]# kubectl get ns
NAME STATUS AGE
default Active 5d22h
kube-node-lease Active 5d22h
kube-public Active 5d22h
kube-system Active 5d22h
[root@k8s-b-master ~]#
The final takeaway
Reflecting on this afterwards with the official docs: kube-prometheus registers the v1beta1.metrics.k8s.io aggregated API, served by prometheus-adapter (with kube-state-metrics feeding cluster-state metrics into Prometheus). With both of those Pods stuck in ImagePullBackOff, the APIService had no healthy backend, so API discovery through the kube-apiserver returned 503 for that group. Before finishing a namespace deletion, the namespace controller runs a full discovery pass — it has to prove that every resource type in every API group has been cleaned up — so a single unavailable aggregated API is enough to leave every namespace deletion stuck in Terminating. Other consumers of metrics.k8s.io, such as HPA and kubectl top, would have been equally affected. In other words, a broken aggregated API registration, not the API Server itself, was what made namespace deletion fail.
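Based on this, a quick health check worth running whenever a namespace refuses to die is to list every aggregated APIService that is not Available, since any one of them can block namespace finalization cluster-wide. A sketch using kubectl's JSONPath support:

```shell
# Print "name<TAB>availability" for every APIService, then keep only
# the entries whose Available condition is not True.
kubectl get apiservice \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Available")].status}{"\n"}{end}' \
  | awk '$2 != "True"'
```

An empty result means discovery is healthy; any line printed here points at the registration to investigate first.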
This article was originally published on the WeChat public account 不背锅运维 (follow us if you like this kind of content): https://mp.weixin.qq.com/s/JLeL4j-mX2x2BBZxWu414g