Author: Lao Z, O&M architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., and a cloud-native enthusiast currently focused on cloud-native operations. His technology stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.
Preface
Test server configuration
Hostname | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
---|---|---|---|---|---|---|
zdeops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible O&M control node |
ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
storage-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
storage-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
storage-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
Total (8 hosts) | | 22 | 84 | 320 | 2800 | |
Software versions used in the test environment
- OS: CentOS-7.9-x86_64
- Ansible:2.8.20
- KubeSphere:3.3.0
- Kubernetes:v1.24.1
- GlusterFS:9.5.1
- ElasticSearch:7.17.5
- Harbor:2.5.1
Introduction
During a security assessment, a production Kubernetes cluster deployed with KubeSphere 3.3.0 was found to have security vulnerabilities, one of which was flagged as the SSL/TLS protocol information disclosure vulnerability (CVE-2016-2183).
This article describes in detail the cause of the vulnerability, the remediation plan, the step-by-step fix procedure, and the points to watch out for.
Vulnerability details and remediation plan
Vulnerability details
The details of the SSL/TLS protocol information disclosure vulnerability (CVE-2016-2183) given in the scan report are as follows:
Vulnerability analysis
- Analyzing the scan report, we find that the vulnerability involves the following ports and services:
Port | Service |
---|---|
2379/2380 | etcd |
6443 | kube-apiserver |
10250 | kubelet |
10257 | kube-controller |
10259 | kube-scheduler |
- On an affected node (any master node), check and confirm the services listening on those ports:
# etcd
[root@ks-k8s-master-0 ~]# ss -ntlup | grep etcd | grep -v "127.0.0.1"
tcp LISTEN 0 128 192.168.9.91:2379 *:* users:(("etcd",pid=1341,fd=7))
tcp LISTEN 0 128 192.168.9.91:2380 *:* users:(("etcd",pid=1341,fd=5))
# kube-apiserver
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 6443
tcp LISTEN 0 128 [::]:6443 [::]:* users:(("kube-apiserver",pid=1743,fd=7))
# kubelet
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10250
tcp LISTEN 0 128 [::]:10250 [::]:* users:(("kubelet",pid=1430,fd=24))
# kube-controller
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10257
tcp LISTEN 0 128 [::]:10257 [::]:* users:(("kube-controller",pid=19623,fd=7))
# kube-scheduler
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10259
tcp LISTEN 0 128 [::]:10259 [::]:* users:(("kube-scheduler",pid=1727,fd=7))
- Root cause:
The configurations of the affected services allow cipher suites based on algorithms such as IDEA, DES, and 3DES.
- Verify the vulnerability with a testing tool:
Either Nmap or openssl can be used for verification; this article focuses on the Nmap approach.
Note: the openssl output is verbose and hard to judge at a glance; if interested, try the command openssl s_client -connect 192.168.9.91:10257 -cipher "DES:3DES".
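If you do want a scriptable openssl check, here is a minimal sketch, assuming bash and openssl on the scanning host; the port list comes from the table above:
# Probe each affected port for a DES/3DES handshake; a successful handshake
# means the weak ciphers are still enabled. Note: the etcd ports (2379/2380)
# enforce client certificate auth, which can also make the handshake fail;
# pass -cert/-key there to avoid a false "rejected".
for port in 2379 2380 6443 10250 10257 10259; do
  if echo | openssl s_client -connect 192.168.9.91:${port} -cipher "DES:3DES" >/dev/null 2>&1; then
    echo "port ${port}: 3DES handshake accepted (vulnerable)"
  else
    echo "port ${port}: 3DES handshake rejected"
  fi
done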
Install the Nmap testing tool on any node and run the test commands.
The wrong way, included only to show that choosing the right Nmap version matters; do not run this in practice.
# Install nmap from the default CentOS repo
yum install nmap
# Run the ssl-enum-ciphers script against port 2379
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output is as follows
Starting Nmap 6.40 (http://nmap.org) at 2023-02-13 14:14 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00013s latency).
PORT STATE SERVICE
2379/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.30 seconds
Note: the output contains no warning at all. The cause is that this Nmap version is too old; version 7.x or later is required.
The right way, the steps actually performed:
# Download and install the latest package from the Nmap official site
rpm -Uvh https://nmap.org/dist/nmap-7.93-1.x86_64.rpm
# Run the ssl-enum-ciphers script against port 2379
# nmap -sV --script ssl-enum-ciphers -p 2379 192.168.9.91 (more detailed output but slower; the simpler form below is used to save space)
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output is as follows
Starting Nmap 7.93 (https://nmap.org) at 2023-02-13 17:28 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00013s latency).
PORT STATE SERVICE
2379/tcp open etcd-client
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (ecdh_x25519) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.66 seconds
# Run the ssl-enum-ciphers script against port 2380
nmap --script ssl-enum-ciphers -p 2380 192.168.9.91
# The output is as follows
Starting Nmap 7.93 (https://nmap.org) at 2023-02-13 17:28 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00014s latency).
PORT STATE SERVICE
2380/tcp open etcd-server
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (ecdh_x25519) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.64 seconds
# Run the ssl-enum-ciphers script against port 6443 (ports 10250/10257/10259 produce the same scan results)
nmap --script ssl-enum-ciphers -p 6443 192.168.9.91
# The output is as follows
Starting Nmap 7.93 (https://nmap.org) at 2023-02-13 17:29 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00014s latency).
PORT STATE SERVICE
6443/tcp open sun-sr-https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| compressors:
| NULL
| cipher preference: server
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.66 seconds
Note: in the scan results, focus on the warnings section: 64-bit block cipher 3DES vulnerable to SWEET32 attack.
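To re-check all affected ports in one pass, the same scan can be wrapped in a small loop; a sketch using the Nmap 7.93 installed above:
# Scan every affected port and flag any that still negotiates 3DES.
# No output means no SWEET32 warning on any port.
for port in 2379 2380 6443 10250 10257 10259; do
  nmap --script ssl-enum-ciphers -p "${port}" 192.168.9.91 | grep -q SWEET32 && echo "port ${port}: vulnerable to SWEET32"
done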
Remediation plan
The remediation suggested in the scan report does not apply to etcd or the Kubernetes components.
For etcd, Kubernetes, and related services, the effective fix is to modify each service's configuration and disable the 3DES-related cipher settings.
For choosing the cipher suites parameter, you can refer to the official etcd documentation or the IBM private cloud documentation; many configurations found online are copied from the IBM docs, so if you want to save effort you can use them as-is.
For the final choice of parameters, I took the most straightforward approach: concatenating the cipher values listed in the scan results. Since the impact scope was unclear, I conservatively kept the original set and only removed the 3DES-related entries.
The following summarizes usable cipher suites configurations for reference.
- Cipher suites from the original scan results:
- TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- Cipher suites from the original scan results with 3DES removed:
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
When using this option, the suites must be configured strictly in the following order; in my tests, an inconsistent order caused the etcd service to restart repeatedly.
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Although the cipher set is identical, with the ordering below the etcd service kept restarting. I troubleshot for a long time without pinning down the root cause. It may be a mistake on my side, but after comparing the strings many times I found no difference, so for now I can only attribute it to the ordering.
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
Note: only the etcd service was affected by the ordering; for the kube-* components, a different order caused no anomalies.
- Cipher suites from the IBM documentation:
This is the configuration most often referenced online. It also worked well in my tests: all services started normally and etcd showed no restart loops. If you have no special requirements, you can adopt this option; the fewer suites enabled, the smaller the chance of a security issue.
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
Fixing the vulnerability
It is recommended to fix the components in the following order:
- etcd
- kube-apiserver
- kube-controller
- kube-scheduler
- kubelet
The key point in this sequence is to fix and restart etcd first. The kube-* components depend on etcd: during my validation I fixed etcd last, and when etcd failed to start (restarting repeatedly), the other services could not connect to etcd and aborted abnormally. So make sure etcd is running properly before fixing the other components.
This article demonstrates the procedure on a single node only. When multiple nodes are affected, proceed component by component: finish fixing one component and confirm it is correct before moving on to the next.
The steps below are experience I validated hands-on and are for reference only; in production, be sure to verify and test thoroughly before applying them!
Fix etcd
- Edit the etcd configuration file /etc/etcd.env:
KubeSphere 3.3.0 deploys etcd as a binary; the related configuration files are /etc/systemd/system/etcd.service and /etc/etcd.env, and the parameter settings live in /etc/etcd.env.
# Append the configuration at the end of the file (automated with cat)
cat >> /etc/etcd.env << "EOF"
# TLS CIPHER SUITES settings
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
EOF
- Restart the etcd service:
# Restart the service
systemctl restart etcd
# Verify the service is up
ss -ntlup | grep etcd
# The correct result looks like this
tcp LISTEN 0 128 192.168.9.91:2379 *:* users:(("etcd",pid=40160,fd=7))
tcp LISTEN 0 128 127.0.0.1:2379 *:* users:(("etcd",pid=40160,fd=6))
tcp LISTEN 0 128 192.168.9.91:2380 *:* users:(("etcd",pid=40160,fd=5))
# Keep watching to make sure the service is not restarting repeatedly
watch -n 1 -d 'ss -ntlup | grep etcd'
Note: in a multi-node setup, be sure to modify the configuration file on all nodes first, and then restart the etcd service on all nodes at the same time. The restart interrupts the etcd service; proceed with caution in production.
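Since the test environment includes an Ansible control node, the "modify all nodes first, then restart together" requirement can be scripted; a sketch, assuming an inventory group named etcd containing the three etcd nodes (the group name is hypothetical):
# Append the cipher setting on every etcd node, then restart etcd on all of
# them in one pass so the members come back up together.
ansible etcd -m lineinfile -a "path=/etc/etcd.env line='ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA'"
ansible etcd -m systemd -a "name=etcd state=restarted"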
- Verify that the vulnerability is fixed:
# Run the scan
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output is as follows
Starting Nmap 7.93 (https://nmap.org) at 2023-02-14 17:48 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00015s latency).
PORT STATE SERVICE
2379/tcp open etcd-client
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
|_ least strength: A
Nmap done: 1 IP address (1 host up) scanned in 0.64 seconds
# To save space, the full scan output for port 2380 is omitted; it matches that of port 2379
# Alternatively, run the filtered scan below; if it returns nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 2380 192.168.9.91 | grep SWEET32
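Besides the port scan, it is worth confirming cluster health from etcd's own side after the restart; a sketch, assuming etcdctl is installed and using KubeKey-style certificate paths (the paths and file names are assumptions; adjust them to your deployment):
# Query member health over TLS; all endpoints should report healthy.
export ETCDCTL_API=3
etcdctl --endpoints=https://192.168.9.91:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-ks-k8s-master-0.pem \
  --key=/etc/ssl/etcd/ssl/admin-ks-k8s-master-0-key.pem \
  endpoint health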
Fix kube-apiserver
- Edit the kube-apiserver configuration file /etc/kubernetes/manifests/kube-apiserver.yaml:
# Add the flag (insert a new line after line 47 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers shown instead of a screenshot, for clarity)
46 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
47 - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
48 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-apiserver:
No manual restart is needed; it is a static Pod, so Kubernetes restarts it automatically (a quick check is sketched below).
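To confirm the kubelet actually recreated the Pod with the new flag, one option is the following sketch; kube-apiserver-ks-k8s-master-0 follows the standard static Pod naming of component name plus node name:
# Wait for the mirror Pod to be Running again, then confirm the new flag
# appears in the live Pod spec.
kubectl -n kube-system get pod kube-apiserver-ks-k8s-master-0
kubectl -n kube-system get pod kube-apiserver-ks-k8s-master-0 -o yaml | grep tls-cipher-suites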
- Verify the fix:
# Run the scan
nmap --script ssl-enum-ciphers -p 6443 192.168.9.91
# The output is as follows
Starting Nmap 7.93 (https://nmap.org) at 2023-02-14 09:22 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00015s latency).
PORT STATE SERVICE
6443/tcp open sun-sr-https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: A
Nmap done: 1 IP address (1 host up) scanned in 0.68 seconds
Note: compared with the earlier findings, the scan output no longer contains 64-bit block cipher 3DES vulnerable to SWEET32 attack, which means the fix succeeded.
Fix kube-controller
- Edit the kube-controller configuration file /etc/kubernetes/manifests/kube-controller-manager.yaml:
# Add the flag (insert a new line after line 33 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers shown instead of a screenshot, for clarity)
33 - --use-service-account-credentials=true
34 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-controller:
No manual restart is needed; it is a static Pod, so Kubernetes restarts it automatically.
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10257 192.168.9.91
# To save space, the full output is omitted; it matches that of kube-apiserver
# Alternatively, run the filtered scan below; if it returns nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10257 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains 64-bit block cipher 3DES vulnerable to SWEET32 attack, which means the fix succeeded.
Fix kube-scheduler
- Edit the kube-scheduler configuration file /etc/kubernetes/manifests/kube-scheduler.yaml:
# Add the flag (insert a new line after line 19 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers shown instead of a screenshot, for clarity)
19 - --leader-elect=true
20 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-scheduler:
No manual restart is needed; it is a static Pod, so Kubernetes restarts it automatically (see the sanity check below).
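At this point all three control-plane manifests should carry the flag; a quick sanity-check sketch:
# List the static Pod manifests that contain the flag; kube-apiserver,
# kube-controller-manager, and kube-scheduler should all show up.
grep -l "tls-cipher-suites" /etc/kubernetes/manifests/*.yaml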
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10259 192.168.9.91
# To save space, the full output is omitted; it matches that of kube-apiserver
# Alternatively, run the filtered scan below; if it returns nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10259 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains 64-bit block cipher 3DES vulnerable to SWEET32 attack, which means the fix succeeded.
Fix kubelet
- Edit the kubelet configuration file /var/lib/kubelet/config.yaml:
# Append at the end of the configuration file
tlsCipherSuites: [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA]
Tip: for more cipher suites options, see the official Kubernetes documentation.
- Restart kubelet:
systemctl restart kubelet
Restarting carries risk; proceed with caution!
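Before re-scanning, you can also confirm that the running kubelet loaded the new setting through the node's configz endpoint; a sketch, assuming kubectl and jq are available:
# Read the live kubelet configuration via the API server proxy and print the
# cipher suite list it is actually using.
kubectl get --raw "/api/v1/nodes/ks-k8s-master-0/proxy/configz" | jq '.kubeletconfig.tlsCipherSuites'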
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10250 192.168.9.91
# To save space, the full output is omitted; it matches that of kube-apiserver
# Alternatively, run the filtered scan below; if it returns nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10250 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains 64-bit block cipher 3DES vulnerable to SWEET32 attack, which means the fix succeeded.
Common issues
etcd fails to start
Error message:
Feb 13 16:17:41 ks-k8s-master-0 etcd: etcd Version: 3.4.13
Feb 13 16:17:41 ks-k8s-master-0 etcd: Git SHA: ae9734ed2
Feb 13 16:17:41 ks-k8s-master-0 etcd: Go Version: go1.12.17
Feb 13 16:17:41 ks-k8s-master-0 etcd: Go OS/Arch: linux/amd64
Feb 13 16:17:41 ks-k8s-master-0 etcd: setting maximum number of CPUs to 4, total number of available CPUs is 4
Feb 13 16:17:41 ks-k8s-master-0 etcd: the server is already initialized as member before, starting as etcd member...
Feb 13 16:17:41 ks-k8s-master-0 etcd: unexpected TLS cipher suite "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
Feb 13 16:17:42 ks-k8s-master-0 systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
Feb 13 16:17:42 ks-k8s-master-0 systemd: Failed to start etcd.
Feb 13 16:17:42 ks-k8s-master-0 systemd: Unit etcd.service entered failed state.
Feb 13 16:17:42 ks-k8s-master-0 systemd: etcd.service failed.
Solution:
Remove the TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 entry from the configuration file. I did not investigate the root cause in depth; most likely this etcd 3.4.13 binary (built with Go 1.12, as the log shows) predates that suite name, since older Go releases expose it as TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 without the _SHA256 suffix.
The etcd service restarts repeatedly
Error message (partially omitted):
After modifying the configuration file and restarting etcd, the restart command itself reported no error. However, the service status was abnormal afterwards, and /var/log/messages contained entries like the following:
Feb 13 16:25:55 ks-k8s-master-0 systemd: etcd.service holdoff time over, scheduling restart.
Feb 13 16:25:55 ks-k8s-master-0 systemd: Stopped etcd.
Feb 13 16:25:55 ks-k8s-master-0 systemd: Starting etcd...
Feb 13 16:25:55 ks-k8s-master-0 etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://192.168.9.91:2379
Feb 13 16:25:55 ks-k8s-master-0 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Feb 13 16:25:55 ks-k8s-master-0 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Feb 13 16:25:55 ks-k8s-master-0 etcd: recognized and used environment variable ETCD_AUTO_COMPACTION_RETENTION=8
..... (omitted)
Feb 13 16:25:58 ks-k8s-master-0 systemd: Started etcd.
Feb 13 16:25:58 ks-k8s-master-0 etcd: serving client requests on 192.168.9.91:2379
Feb 13 16:25:58 ks-k8s-master-0 etcd: serving client requests on 127.0.0.1:2379
Feb 13 16:25:58 ks-k8s-master-0 etcd: accept tcp 127.0.0.1:2379: use of closed network connection
Feb 13 16:25:58 ks-k8s-master-0 systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
Feb 13 16:25:58 ks-k8s-master-0 systemd: Unit etcd.service entered failed state.
Feb 13 16:25:58 ks-k8s-master-0 systemd: etcd.service failed.
Solution:
In actual testing, two scenarios both produced errors similar to the above:
First, in a multi-node etcd environment, you must modify the etcd configuration file on all nodes first and then restart the etcd service on all nodes simultaneously.
Second, the ordering of the etcd cipher parameter: after repeated attempts to settle on the final order (see the main text for the configuration), the restart loop no longer occurred.