Preface:
Today a friend asked me why kubectl get csr printed no output in his Kubernetes cluster. I tried it in my own cluster, and indeed there were no CSRs. So what is a CSR? Why should kubectl get csr have any output at all? And when do CSRs appear? (I mean the ones the system generates by default, not ones you create yourself!)
1. About CSRs in Kubernetes
1. What is a CSR?
CSR is short for CertificateSigningRequest, i.e. a certificate signing request. For the full definition, please see the official Kubernetes docs: https://kubernetes.io/zh/docs/reference/access-authn-authz/certificate-signing-requests/
2. Must a Kubernetes cluster have CSRs? When do they appear?
Note: the CSRs discussed here are the default ones; manually created CSRs are not included.
See the official docs: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join/
The conclusion from the docs: a Kubernetes cluster does generate CSRs during the join process!
First, let's be clear about what join means (a worker node joining the Kubernetes cluster). There are two common ways to join:
1. Use a shared token plus the API server's IP address, in the following format:
kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
2. That said, it's worth stressing that this is not the only way to join a cluster:
You can provide a file, a subset of a standard kubeconfig file. The file can be local, or it can be downloaded via an HTTPS URL:
kubeadm join --discovery-file path/to/file.conf
kubeadm join --discovery-file https://url/file.conf
Most people join with the first method; I only noticed the second one today while reading the docs... so I'm writing it down, with a sketch below.
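As a sketch of the second method: kubeadm stores a suitable kubeconfig subset in the cluster-info ConfigMap in kube-public (that is what token-based discovery reads), so one way to produce a discovery file is shown below. The exact flag combination is an assumption on my part, so verify it against the kubeadm docs; with a discovery file, a token is reportedly still needed for the kubelet's TLS bootstrap, hence --tls-bootstrap-token.
# On a master: dump the published kubeconfig subset into a discovery file
kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' > discovery.conf
# Copy discovery.conf to the joining node (or serve it at an HTTPS URL), then:
kubeadm join --discovery-file discovery.conf --tls-bootstrap-token abcdef.1234567890abcdef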
3. To summarize
CSRs are generated during the Kubernetes join process: as part of TLS bootstrapping, the joining kubelet uses the bootstrap token to submit a CSR to the kubernetes.io/kube-apiserver-client-kubelet signer. Let's see this in practice below.
2. Demonstrating CSR generation in a real environment
Two Rocky Linux 8.5 machines as an example. This is only a quick demo, not for production (no system tuning at all; I just want to run a join).
A lot of it exists purely to get the test running!
Hostname | IP | Role |
---|---|---|
k8s-master-01 | 10.0.4.2 | master node |
k8s-work-01 | 10.0.4.36 | worker node |
1. Build a simple Kubernetes cluster on Rocky with kubeadm
1. Some basic system setup
I'll skip the kernel upgrade and other tuning since I only want to demonstrate CSRs; just run an update:
[root@k8s-master-01 ~]# yum update -y
[root@k8s-work-01 ~]# yum update -y
Add the Kubernetes yum repo, disable firewalld and SELinux, set up br_netfilter, and so on:
### Add the Aliyun Kubernetes repo
[root@k8s-master-01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Disable firewalld
[root@k8s-master-01 ~]# systemctl stop firewalld
[root@k8s-master-01 ~]# systemctl disable firewalld
# Disable SELinux
[root@k8s-master-01 ~]# sed -ie 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-master-01 ~]# setenforce 0
# Let iptables see bridged traffic
[root@k8s-master-01 ~]# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[root@k8s-master-01 ~]# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master-01 ~]# sudo sysctl --system
The screenshots use the k8s-work-01 node, but both servers need this same system setup.
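To double-check that the module and sysctls actually took effect, something like the following should do (spoiler: I skipped this and paid for it later at kubeadm init time):
[root@k8s-work-01 ~]# lsmod | grep br_netfilter
[root@k8s-work-01 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward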
2. Install and configure containerd
1. Install containerd
[root@k8s-work-01 ~]# dnf install dnf-utils device-mapper-persistent-data lvm2
[root@k8s-work-01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Failed to set locale, defaulting to C.UTF-8
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-work-01 ~]# sudo yum update -y && sudo yum install -y containerd.io
2. Generate the config file and change the sandbox_image registry
Generate the default config file, then point the pause image at the Aliyun mirror registry:
[root@k8s-work-01 yum.repos.d]# containerd config default > /etc/containerd/config.toml
Change the pause container image to registry.aliyuncs.com/google_containers.
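For example, something like the sed below (a sketch only: the default pause image and tag vary across containerd versions, so check what your generated config.toml actually contains first):
[root@k8s-work-01 ~]# sed -i 's#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#' /etc/containerd/config.toml
[root@k8s-work-01 ~]# grep sandbox_image /etc/containerd/config.toml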
3. Reload the service
[root@k8s-work-01 yum.repos.d]# systemctl daemon-reload
[root@k8s-work-01 yum.repos.d]# systemctl restart containerd
[root@k8s-work-01 yum.repos.d]# systemctl status containerd
4. Configure the CRI client crictl
https://github.com/kubernetes-sigs/cri-tools/releases/
[root@k8s-master-01 ~]# VERSION="v1.23.0"
[root@k8s-master-01 ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
[root@k8s-master-01 ~]# sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
crictl
[root@k8s-master-01 ~]# rm -f crictl-$VERSION-linux-amd64.tar.gz
[root@k8s-work-01 ~]# cat <<EOF > /etc/crictl.yaml
> runtime-endpoint: unix:///run/containerd/containerd.sock
> image-endpoint: unix:///run/containerd/containerd.sock
> timeout: 10
> debug: false
> EOF
[root@k8s-work-01 ~]# crictl pull nginx:alpine
Image is up to date for sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
[root@k8s-work-01 ~]# crictl images
IMAGE                     TAG      IMAGE ID        SIZE
docker.io/library/nginx   alpine   51696c87e77e4   10.2MB
Note: again, install it on both servers. crictl may download slowly because of the GFW; work around that however you like, it won't be covered here!
3. Install kubeadm
# List all installable versions
# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Install with the command below (it installed the latest, 1.23.5; a specific version can also be chosen)
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
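If you'd rather pin a version than take whatever is latest, the usual yum form is (1.23.5 as the example):
# yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes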
Modify the kubelet configuration:
vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"
[root@k8s-master-01 ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
4. Initialize k8s-master-01
1. Generate the config file
[root@k8s-master-01 ~]# kubeadm config print init-defaults > config.yaml
2. Edit the config file
advertiseAddress: set to k8s-master-01's IP
criSocket: /run/containerd/containerd.sock (note: the default still points at Docker)
imageRepository: registry.aliyuncs.com/google_containers (note: the default is the gcr registry)
kubernetesVersion: 1.23.5 (note: the default is 1.23.0; I changed it directly)
podSubnet: 10.244.0.0/16 (note: absent by default; added so it matches flannel, since I'd rather not edit the flannel manifest)
Note: name: node should be changed to the hostname, otherwise the master node will show up as "node" later!
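Pulling those edits together, the relevant parts of config.yaml end up roughly like this (a sketch assuming the kubeadm v1beta3 layout; all the other generated defaults are omitted):
localAPIEndpoint:
  advertiseAddress: 10.0.4.2                  # k8s-master-01's IP
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: k8s-master-01                         # otherwise the master shows up as "node"
---
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.23.5
networking:
  podSubnet: 10.244.0.0/16                    # must match flannel's Network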
3. kubeadm init
[root@k8s-master-01 ~]# kubeadm init --config=config.yaml
kubeadm init failed with the following error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Run the following on both servers (the modules-load.d file written earlier only takes effect at boot, so br_netfilter had never actually been loaded):
[root@k8s-master-01 ~]# modprobe br_netfilter
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
Then run init again:
[root@k8s-master-01 ~]# kubeadm init --config=config.yaml
[root@k8s-master-01 ~]# mkdir -p $HOME/.kube
[root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-01 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-work-01   NotReady   <none>   46s   v1.23.5
5. The worker node joins the cluster
[root@k8s-work-01 ~]# kubeadm join 10.0.4.2:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:abdffa455bed6eeda802563b826d042e6e855b30d2f2dbc9b6e0cd4515dfe1e2
[root@k8s-master-01 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
k8s-work-01   NotReady   <none>                 96m   v1.23.5
node          NotReady   control-plane,master   96m   v1.23.5
Note: why the master node shows up as "node" was explained above.
6. Install the flannel network plugin
[root@k8s-master-01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master-01 ~]# kubectl apply -f kube-flannel.yml
In the flannel manifest the default Network is 10.244.0.0/16; it must match podSubnet in the kubeadm init config!
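The fragment in question lives in the kube-flannel ConfigMap's net-conf.json; it looks roughly like this:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }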
[root@k8s-master-01 ~]# kubectl get pods -n kube-system
7. Verify that CSRs were generated:
[root@k8s-master-01 ~]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-8jq54   8m30s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:abcdef   <none>              Approved,Issued
csr-qpsst   8m54s   kubernetes.io/kube-apiserver-client-kubelet   system:node:node          <none>              Approved,Issued
Note the AGE column: CSRs have a lifecycle and are garbage-collected automatically after a while (see the official docs linked above), which is exactly why kubectl get csr on a long-running cluster usually prints No resources found.
2. CSR generation when scaling out an existing kubeadm cluster
I grabbed an existing cluster, one built over Tencent Cloud's Cloud Connect Network:
[root@sh-master-01 ~]# kubectl get csr
No resources found
[root@sh-master-01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
bj-work-01     NotReady   <none>                 130d   v1.21.3
sh-master-01   Ready      control-plane,master   130d   v1.21.3
sh-master-02   Ready      control-plane,master   130d   v1.21.3
sh-master-03   Ready      control-plane,master   130d   v1.21.3
sh-work-01     Ready      <none>                 130d   v1.21.3
sh-work-02     Ready      <none>                 130d   v1.21.3
The plan: kick the sh-work-02 node out of the cluster, generate a token and the SHA256 CA cert hash on a master node, then have sh-work-02 rejoin the cluster.
Reference: "Kubernetes cluster scale-out", an article I wrote years ago; I just followed it wholesale.
[root@sh-master-01 ~]# kubectl delete nodes sh-work-02
node "sh-work-02" deleted
[root@sh-master-01 ~]# kubeadm token create
zu1fum.7wzn5rnj5kiz62mu
[root@sh-master-01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3
[root@sh-master-01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
bj-work-01     NotReady   <none>                 130d   v1.21.3
sh-master-01   Ready      control-plane,master   130d   v1.21.3
sh-master-02   Ready      control-plane,master   130d   v1.21.3
sh-master-03   Ready      control-plane,master   130d   v1.21.3
sh-work-01     Ready      <none>                 130d   v1.21.3
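Aside: kubeadm can also emit the complete join command (token plus CA cert hash) in one step, which saves the openssl incantation above:
[root@sh-master-01 ~]# kubeadm token create --print-join-command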
On the sh-work-02 node:
[root@sh-work-02 ~]# kubeadm reset
[root@sh-work-02 ~]# reboot
[root@sh-work-02 ~]# kubeadm join 10.10.2.4:6443 --token zu1fum.7wzn5rnj5kiz62mu --discovery-token-ca-cert-hash sha256:ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3
On sh-master-01:
[root@sh-master-01 ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-lz6wl   97s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:zu1fum   Approved,Issued
3. Create a TLS certificate and approve the certificate signing request
Following the official docs: Manage TLS Certificates in a Cluster
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# mv cfssl_linux-amd64 /usr/bin/cfssl
[root@k8s-master-01 ~]# mv cfssljson_linux-amd64 /usr/bin/cfssljson
[root@k8s-master-01 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master-01 ~]# cfssl version
Create the namespace, pod, and service we need:
[root@k8s-master-01 ~]# kubectl create ns my-namespace
namespace/my-namespace created
[root@k8s-master-01 ~]# kubectl run my-pod --image=nginx -n my-namespace
pod/my-pod created
kubectl apply -f service.yaml
[root@k8s-master-01 ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
  labels:
    run: my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-pod
[root@k8s-master-01 ~]# kubectl get all -n my-namespace -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
pod/my-pod   1/1     Running   0          51m   10.244.1.2   k8s-work-01   <none>           <none>

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/my-svc   ClusterIP   10.109.248.68   <none>        80/TCP    7m6s   run=my-pod
Create the certificate signing request
Replace the domain names and IPs below with those of your own service and pod:
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "10.244.1.2",
    "10.109.248.68"
  ],
  "CN": "system:node:my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "O": "system:nodes"
    }
  ]
}
EOF
Create a CertificateSigningRequest object and send it to the Kubernetes API (the cfssljson -bare server step above wrote server.csr and server-key.pem to the current directory):
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
[root@k8s-master-01 ~]# kubectl describe csr my-svc.my-namespace
Approve the certificate signing request
[root@k8s-master-01 ~]# kubectl certificate approve my-svc.my-namespace
certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
[root@k8s-master-01 ~]# kubectl get csr
NAME                  AGE   SIGNERNAME                      REQUESTOR          REQUESTEDDURATION   CONDITION
my-svc.my-namespace   87s   kubernetes.io/kubelet-serving   kubernetes-admin   <none>              Approved,Issued
Download the certificate and use it
[root@k8s-master-01 key]# kubectl get csr
[root@k8s-master-01 key]# kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
    | base64 --decode > server.crt
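Before using the certificate anywhere, it's worth a quick sanity check that the SANs came out as requested; openssl can dump them:
[root@k8s-master-01 key]# openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'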
Use server.crt and server-key.pem as a key pair to start an HTTPS server? I have no such use case... TLS for internal services is all terminated elsewhere these days. In "Installing traefik on Kubernetes 1.20.5 under Tencent Cloud in practice", the certificate hangs off the SLB... so this scenario is of no use to me!
Summary:
- Set up a kubeadm 1.23 cluster on Rocky Linux
- Walked through how CSRs are generated; kubectl get csr printing no output does not mean something is broken
- Tried out signing and approving an internal TLS certificate (even though my real environment has no use for it)
- Learned the other join method, kubeadm join --discovery-file; honestly I had never read that part carefully before
- CSRs can also be used for user authentication and authorization: https://kubernetes.io/zh/docs/reference/access-authn-authz/certificate-signing-requests/#authorization. I previously did this as described in "Kubernetes kubeconfig: granting an ordinary user access to the cluster".