Prologue:
Today a friend asked me: why does `kubectl get csr` show no output in his Kubernetes cluster? I tried it, and sure enough my cluster had no CSRs either. So what is a CSR? Why should `kubectl get csr` necessarily have output? And when do CSRs appear? (I'm talking about the ones the system creates by default, not ones you create yourself!)
1. Kubernetes and CSRs
1. What is a CSR?
CSR is short for CertificateSigningRequest, i.e. a certificate signing request. For the full definition see the official Kubernetes docs: https://kubernetes.io/zh/docs/reference/access-authn-authz/certificate-signing-requests/
2. Must a Kubernetes cluster have CSRs? When do they appear?
Note: the CSRs discussed here are the default ones; manually created CSRs are excluded.
See the official docs: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join/
The conclusion from the docs: CSRs are generated while nodes join a Kubernetes cluster!
First, let's clarify the join (i.e. a worker node joining the Kubernetes cluster). There are two common join methods:
1. Using a shared token plus the API server's IP address, in the following format:
kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
2. But it's worth stressing that this is not the only way to join a cluster:
You can provide a file, a subset of a standard kubeconfig file. It can be a local file, or it can be downloaded via an HTTPS URL:
kubeadm join --discovery-file path/to/file.conf or kubeadm join --discovery-file https://url/file.conf
Most people join clusters the first way; I only noticed the second one today while reading the docs... so I'm recording it here.
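Incidentally, the discovery file is essentially the cluster-info kubeconfig that kubeadm publishes. A minimal sketch of the second method (assuming a standard kubeadm cluster, where the cluster-info ConfigMap lives in kube-public; a bootstrap token is still needed for the TLS bootstrap phase):
# On the master: dump the public cluster-info kubeconfig into a discovery file
kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' > discovery.conf
# Copy discovery.conf to the worker, then join with it
kubeadm join --discovery-file discovery.conf --tls-bootstrap-token abcdef.0123456789abcdef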
3. To sum up
CSRs are generated during a Kubernetes join. Let's try it out below.
2. Demonstrating CSR generation in a real environment
Two Rocky Linux 8.5 machines as an example. This is a bare-bones demo, not for production (no system tuning; just enough to run a join).
A lot of it exists purely to run the test!
| Hostname | IP | Role |
|---|---|---|
| k8s-master-01 | 10.0.4.2 | master node |
| k8s-work-01 | 10.0.4.36 | worker node |
1. Build a simple Kubernetes cluster on Rocky with kubeadm
1. Some basic system settings
I'll skip kernel upgrades and other optimizations, since the point is just to demonstrate CSRs! Only an update:
[root@k8s-master-01 ~]# yum update -y
and
[root@k8s-work-01 ~]# yum update -y
Add the yum repo, disable firewalld and SELinux, set up br_netfilter, and so on.
### Add the Aliyun Kubernetes repo
[root@k8s-master-01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Disable firewalld
[root@k8s-master-01 ~]# systemctl stop firewalld
[root@k8s-master-01 ~]# systemctl disable firewalld
# Disable SELinux
[root@k8s-master-01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-master-01 ~]# setenforce 0
# Let iptables see bridged traffic
[root@k8s-master-01 ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[root@k8s-master-01 ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master-01 ~]# sudo sysctl --system
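It's worth verifying that the module and sysctls actually took effect; a quick check (as we'll see below, kubeadm's preflight fails when br_netfilter isn't loaded):
# br_netfilter must be loaded before the bridge sysctls exist
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables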
The screenshots here are from the k8s-work-01 node, but both servers need all of these system settings.
2. Install and configure containerd
1. Install containerd
[root@k8s-work-01 ~]# dnf install dnf-utils device-mapper-persistent-data lvm2
[root@k8s-work-01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Failed to set locale, defaulting to C.UTF-8
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-work-01 ~]# sudo yum update -y && sudo yum install -y containerd.io
2. Generate the config file and change the sandbox_image registry address
Generate the config file and point the pause image repository at the Aliyun mirror registry:
[root@k8s-work-01 yum.repos.d]# containerd config default > /etc/containerd/config.toml
Change the pause container image to registry.aliyuncs.com/google_containers, e.g. as sketched below.
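Something like the following (a sketch: the exact default tag depends on the containerd version; Kubernetes 1.23 expects pause:3.6):
# Point sandbox_image at the Aliyun mirror, then verify the resulting line
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml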
3. Reload the service
[root@k8s-work-01 yum.repos.d]# systemctl daemon-reload
[root@k8s-work-01 yum.repos.d]# systemctl restart containerd
[root@k8s-work-01 yum.repos.d]# systemctl status containerd
4. Configure the CRI client crictl
https://github.com/kubernetes-sigs/cri-tools/releases/
[root@k8s-master-01 ~]# VERSION="v1.23.0"
[root@k8s-master-01 ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
[root@k8s-master-01 ~]# sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
crictl
[root@k8s-master-01 ~]# rm -f crictl-$VERSION-linux-amd64.tar.gz
[root@k8s-work-01 ~]# cat <<EOF > /etc/crictl.yaml
> runtime-endpoint: unix:///run/containerd/containerd.sock
> image-endpoint: unix:///run/containerd/containerd.sock
> timeout: 10
> debug: false
> EOF
[root@k8s-work-01 ~]# crictl pull nginx:alpine
Image is up to date for sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
[root@k8s-work-01 ~]# crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/library/nginx alpine 51696c87e77e4 10.2MB
Note: again, install on both servers. crictl downloads can be slow because of the GFW; find your own workaround, I won't go into it here!
3. Install kubeadm
# List all installable versions
# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Install with the command below (this pulled in the latest, 1.23.5); a specific version can also be pinned
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
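If you want to pin a version instead of taking the latest, the usual rpm syntax works, e.g. (hypothetically pinning 1.23.5):
# yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes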
Modify the kubelet configuration:
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
[root@k8s-master-01 ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
4. Initialize k8s-master-01
1. Generate the config file
[root@k8s-master-01 ~]# kubeadm config print init-defaults > config.yaml
2. Modify the config file
advertiseAddress: change to the k8s-master-01 IP
criSocket: /run/containerd/containerd.sock (note: the default is still the docker socket)
imageRepository: registry.aliyuncs.com/google_containers (note: the default is the gcr registry)
kubernetesVersion: 1.23.5 (note: the default is 1.23.0; changed it directly)
podSubnet: 10.244.0.0/16 (note: absent by default; added to match flannel, so the flannel manifest needn't be modified)
Note: name: node should be changed to the hostname, otherwise the master node will later show up with the name node! The edited fragments are sketched below.
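Putting the edits together, the changed fragments of config.yaml look roughly like this (a sketch against the v1beta3 defaults that kubeadm 1.23 prints):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.4.2
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master-01   # default is "node"; see the note above
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.23.5
networking:
  podSubnet: 10.244.0.0/16   # must match flannel's Network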
3. kubeadm init
[root@k8s-master-01 ~]# kubeadm init --config=config.yaml
It produced the following error: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
The k8s.conf under modules-load.d only takes effect at boot, so this time the module has to be loaded by hand. Run the following on both servers:
[root@k8s-master-01 ~]# modprobe br_netfilter
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
Continue with init:
[root@k8s-master-01 ~]# kubeadm init --config=config.yaml
[root@k8s-master-01 ~]# mkdir -p $HOME/.kube
[root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node NotReady control-plane,master 46s v1.23.5
5. The worker node joins the cluster
[root@k8s-work-01 ~]# kubeadm join 10.0.4.2:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:abdffa455bed6eeda802563b826d042e6e855b30d2f2dbc9b6e0cd4515dfe1e2
[root@k8s-master-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-work-01 NotReady <none> 96m v1.23.5
node NotReady control-plane,master 96m v1.23.5
Note: why the master node shows as node was explained above.
6. Install the flannel network plugin
[root@k8s-master-01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master-01 ~]# kubectl apply -f kube-flannel.yml
The default Network in the flannel manifest is 10.244.0.0/16; it must match the podSubnet in the kubeadm init config! The relevant fragment is shown below.
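The fragment in question sits in the kube-flannel-cfg ConfigMap of kube-flannel.yml and looks roughly like this (backend details may vary by manifest version):
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }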
[root@k8s-master-01 ~]# kubectl get pods -n kube-system
7. Verify that CSRs were generated:
[root@k8s-master-01 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-8jq54 8m30s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:abcdef <none> Approved,Issued
csr-qpsst 8m54s kubernetes.io/kube-apiserver-client-kubelet system:node:node <none> Approved,Issued
Note that they have an AGE! Also, both CSRs show Approved,Issued without any manual step: with kubeadm bootstrap tokens, kube-controller-manager auto-approves these kubelet client CSRs.
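If you're curious what a kubelet actually asked for, the spec.request field is just a base64-encoded PKCS#10 request that openssl can decode (using the csr-8jq54 name from the output above):
kubectl get csr csr-8jq54 -o jsonpath='{.spec.request}' | base64 -d | openssl req -noout -subject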
2. CSR generation when scaling out an existing kubeadm cluster
I found an existing cluster, built on Tencent Cloud in a CCN (inter-region networking) environment:
[root@sh-master-01 ~]# kubectl get csr
No resources found
[root@sh-master-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
bj-work-01 NotReady <none> 130d v1.21.3
sh-master-01 Ready control-plane,master 130d v1.21.3
sh-master-02 Ready control-plane,master 130d v1.21.3
sh-master-03 Ready control-plane,master 130d v1.21.3
sh-work-01 Ready <none> 130d v1.21.3
sh-work-02 Ready <none> 130d v1.21.3
The plan: remove the sh-work-02 node from the cluster, generate a new token and the SHA256 hash string on a master node, then have sh-work-02 rejoin the cluster.
Reference: "Kubernetes cluster scaling", an article I wrote years ago; just following it straight through.
[root@sh-master-01 ~]# kubectl delete nodes sh-work-02
node "sh-work-02" deleted
[root@sh-master-01 ~]# kubeadm token create
zu1fum.7wzn5rnj5kiz62mu
[root@sh-master-01 ~]#
[root@sh-master-01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3
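As an aside, the token and hash steps can be collapsed into one; kubeadm can print the complete join command directly:
# Prints: kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
kubeadm token create --print-join-command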
[root@sh-master-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
bj-work-01 NotReady <none> 130d v1.21.3
sh-master-01 Ready control-plane,master 130d v1.21.3
sh-master-02 Ready control-plane,master 130d v1.21.3
sh-master-03 Ready control-plane,master 130d v1.21.3
sh-work-01 Ready <none> 130d v1.21.3
On the sh-work-02 node:
[root@sh-work-02 ~]# kubeadm reset
[root@sh-work-02 ~]# reboot
[root@sh-work-02 ~]# kubeadm join 10.10.2.4:6443 --token zu1fum.7wzn5rnj5kiz62mu --discovery-token-ca-cert-hash sha256:ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3
On the sh-master-01 node:
[root@sh-master-01 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-lz6wl 97s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:zu1fum Approved,Issued
3. Create a TLS certificate and approve the certificate signing request
Reference: the official docs, "Manage TLS Certificates in a Cluster".
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# mv cfssl_linux-amd64 /usr/bin/cfssl
[root@k8s-master-01 ~]# mv cfssljson_linux-amd64 /usr/bin/cfssljson
[root@k8s-master-01 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master-01 ~]# cfssl version
Create the namespace, pod, and service we need:
[root@k8s-master-01 ~]# kubectl create ns my-namespace
namespace/my-namespace created
[root@k8s-master-01 ~]# kubectl run my-pod --image=nginx -n my-namespace
pod/my-pod created
kubectl apply -f service.yaml
[root@k8s-master-01 ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
  labels:
    run: my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-pod
[root@k8s-master-01 ~]# kubectl get all -n my-namespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/my-pod 1/1 Running 0 51m 10.244.1.2 k8s-work-01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/my-svc ClusterIP 10.109.248.68 <none> 80/TCP 7m6s run=my-pod
Create the certificate signing request
Replace the service/pod DNS names and IPs below with your own:
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "10.244.1.2",
    "10.109.248.68"
  ],
  "CN": "system:node:my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {"O": "system:nodes"}
  ]
}
EOF
This writes server.csr and server-key.pem to the current directory. Now create a CertificateSigningRequest object and send it to the Kubernetes API:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
[root@k8s-master-01 ~]# kubectl describe csr my-svc.my-namespace
Approve the certificate signing request
[root@k8s-master-01 ~]# kubectl certificate approve my-svc.my-namespace
certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
[root@k8s-master-01 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
my-svc.my-namespace 87s kubernetes.io/kubelet-serving kubernetes-admin <none> Approved,Issued
Download the certificate and use it
[root@k8s-master-01 key]# kubectl get csr
[root@k8s-master-01 key]# kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
| base64 --decode > server.crt
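To sanity-check the issued certificate, the SANs in server.crt should match the hosts list from the CSR; a quick look with openssl:
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'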
Use server.crt and server-key.pem as a key pair to start an HTTPS server? I have no such use case... TLS for internal services is all offloaded these days; in my "Installing traefik on Kubernetes 1.20.5 on Tencent Cloud" setup the certificates hang off the SLB... so this scenario is of no use to me!
Summary:
- Set up a kubeadm 1.23 cluster on Rocky.
- Walked through how CSRs come about: empty output from kubectl get csr is not in itself an anomaly.
- Tried out internal TLS certificate signing and approval (even though my real environment doesn't use it).
- Learned the other join method, kubeadm join --discovery-file; honestly, I had never read about it carefully before.
- And CSRs can also be used for user authentication: https://kubernetes.io/zh/docs/reference/access-authn-authz/certificate-signing-requests/#authorization. I previously did it as described in "Kubernetes kubeconfig: authorizing an ordinary user for the cluster"; a minimal sketch follows below.
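For completeness, a minimal sketch of that user-credential flow (a hypothetical user alice; kubernetes.io/kube-apiserver-client is the signer intended for client certificates):
# 1. Generate a key and a CSR for user "alice" in group "dev"
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -subj "/CN=alice/O=dev" -out alice.csr
# 2. Submit it as a CertificateSigningRequest, then approve and fetch the cert
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  request: $(base64 < alice.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
kubectl certificate approve alice
kubectl get csr alice -o jsonpath='{.status.certificate}' | base64 --decode > alice.crt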