Although there are plenty of articles online about building a K8S cluster from scratch, most of them target older versions, and blindly following them to install the latest 1.20 release runs into a pile of problems. So I am recording my installation steps here, hoping to give readers copy-and-paste help with setting up a cluster.
Preparing for deployment

Deploy a minimal K8S cluster: master + node1 + node2

Ubuntu is a desktop-oriented operating system based on Debian Linux, covering word processing, e-mail, software development tools, and web services; users can download, use, and share it free of charge.
```bash
➜ vgs
Current machine states:

master    running (virtualbox)
node1     running (virtualbox)
node2     running (virtualbox)
```
Basic environment information

Set each node's hostname and the hosts file so that the nodes resolve one another.

- The three nodes brought up with the Vagrantfile used here already have this configured
- The master node is used for the checks below; the operations are identical on the other nodes
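If you are not starting from that Vagrantfile, the equivalent manual setup is roughly the following sketch, using the hostnames and IPs from this article:

```bash
# Run on each node, with the matching hostname (k8s-master / k8s-node1 / k8s-node2)
$ sudo hostnamectl set-hostname k8s-master

# Make every node resolvable from every machine
$ cat <<EOF | sudo tee -a /etc/hosts
192.168.30.30 k8s-master
192.168.30.31 k8s-node1
192.168.30.32 k8s-node2
EOF
```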
```bash
# hostnamectl
vagrant@k8s-master:~$ hostnamectl
   Static hostname: k8s-master

# hosts
vagrant@k8s-master:~$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 vagrant.vm vagrant
192.168.30.30 k8s-master
192.168.30.31 k8s-node1
192.168.30.32 k8s-node2

# ping
vagrant@k8s-master:~$ ping k8s-node1
PING k8s-node1 (192.168.30.31) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.30.31): icmp_seq=1 ttl=64 time=0.689 ms
```
Configuring the Aliyun mirrors

Configure Aliyun (Alibaba Cloud) mirrors for Ubuntu to speed up installation. Aliyun mirror documentation: https://developer.aliyun.com/...
```bash
# Log in to the servers
➜ vgssh master/node1/node2
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)

# Set the Aliyun Ubuntu mirror
$ sudo cp /etc/apt/sources.list{,.bak}
$ sudo vim /etc/apt/sources.list

# Configure the Aliyun mirror for kubeadm
$ sudo vim /etc/apt/sources.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
$ sudo gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
$ sudo gpg --export --armor BA07F4FB | sudo apt-key add -

# Configure the Docker repository
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo vim /etc/apt/sources.list
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

# Update the package lists and upgrade
$ sudo apt update
$ sudo apt dist-upgrade
```
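For reference, a typical set of Aliyun entries for Ubuntu 18.04 (bionic) looks like the following; treat it as a sketch rather than the author's exact list:

```bash
# /etc/apt/sources.list entries for the Aliyun bionic mirror
deb https://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
```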
Installing the base tools

Base tools to install for the deployment phase:

- Base component: docker
- Deployment tool: kubeadm
- Routing rules: ipvsadm
- Time synchronization: ntp
```bash
# Install the base tools
$ sudo apt install -y docker-ce docker-ce-cli containerd.io kubeadm ipvsadm ntp ntpdate nginx supervisor

# Add the current user to the docker group (requires logging in again)
$ sudo usermod -a -G docker $USER

# Enable and start the services
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
$ sudo systemctl enable kubelet.service
$ sudo systemctl start kubelet.service
```
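Since this article targets 1.20.2 specifically, it may also be worth pinning the kube packages so a later apt upgrade does not silently move the cluster to a newer release. A sketch; the exact version string is an assumption following the repository's usual `<version>-00` scheme:

```bash
# See which package versions the repository offers
$ apt-cache madison kubeadm | head -n 5

# Pin kubeadm/kubelet/kubectl to the 1.20.2 packages and hold them
$ sudo apt install -y kubeadm=1.20.2-00 kubelet=1.20.2-00 kubectl=1.20.2-00
$ sudo apt-mark hold kubeadm kubelet kubectl
```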
Operating system configuration

OS-level configuration:

- Disable swap
- Configure kernel parameters
- Set the system time zone
- Upgrade the kernel (the default version is 4.15.0)
```bash
# Disable swap
$ sudo swapoff -a

# Tune kernel parameters for K8S. The comments are kept out of the file itself,
# since sysctl does not accept trailing comments after a value:
#   net.bridge.bridge-nf-call-iptables  - enable bridge mode (required)
#   net.bridge.bridge-nf-call-ip6tables - enable bridge mode (required)
#   net.ipv6.conf.all.disable_ipv6      - disable IPv6 (required)
#   net.ipv4.ip_forward                 - forwarding (on by default)
#   vm.panic_on_oom                     - keep OOM handling enabled (default)
#   vm.swappiness                       - forbid use of swap space
#   vm.overcommit_memory                - do not check whether physical memory suffices
#   fs.file-max                         - number of file handles
#   fs.nr_open                          - maximum number of open files
$ cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.ip_forward = 1
vm.panic_on_oom=0
vm.swappiness = 0
vm.overcommit_memory=1
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
EOF

# Inspect a kernel parameter
$ sudo sysctl -a | grep xxx

# Apply the kernel parameter file
$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf

# Set the system time zone to China/Shanghai
$ sudo timedatectl set-timezone Asia/Shanghai

# Keep the hardware clock in UTC
$ sudo timedatectl set-local-rtc 0
```
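To confirm the settings took effect, query the keys directly. Note that the two net.bridge.* keys only exist once the br_netfilter module is loaded, which is done in the next section:

```bash
# Check that br_netfilter is loaded and the bridge keys are active
$ lsmod | grep br_netfilter
$ sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```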
Enabling ipvs

Enable ipvs support:

- Prerequisite for running kube-proxy in ipvs mode (switching the mode itself is sketched after the code block)
```bash
# Load the required kernel module
$ sudo modprobe br_netfilter

# Write the module list (the directory does not exist by default on Ubuntu)
$ sudo mkdir -p /etc/sysconfig/modules
$ cat <<EOF | sudo tee /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Load the modules
$ sudo chmod 755 /etc/sysconfig/modules/ipvs.modules && sudo bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
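Loading these modules is only the prerequisite; kube-proxy itself defaults to iptables mode. One way to switch it after the cluster is up is to edit its ConfigMap and recreate the pods. A sketch; the `k8s-app=kube-proxy` label is assumed from kubeadm's defaults:

```bash
# Set mode: "ipvs" in the kube-proxy configuration
$ kubectl -n kube-system edit configmap kube-proxy

# Recreate the kube-proxy pods so they pick up the new mode
$ kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Verify: ipvsadm should now list virtual servers for the service network
$ sudo ipvsadm -L -n
```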
Deploying the Master node

Minimum node spec: 2 CPUs and 2 GB of RAM; give the worker nodes as much headroom as possible.

Running kubeadm's init command initializes a master deployed as a single node. To avoid needing access to Google's registries, you can substitute Aliyun's mirror of the Google images by passing the corresponding address when running the kubeadm deploy command. You can of course also push the images into a local registry, which is easier to maintain.
```bash
# Log in to the server
➜ vgssh master
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)

# Deploy the node (command-line flags)
# Note: the pod and service CIDRs must differ (otherwise init reports an error)
$ sudo kubeadm init \
    --kubernetes-version=1.20.2 \
    --image-repository registry.aliyuncs.com/google_containers \
    --apiserver-advertise-address=192.168.30.30 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.245.0.0/16

# Deploy from the image configuration (config file)
$ sudo kubeadm init --config ./kubeadm-config.yaml
Your Kubernetes control-plane has initialized successfully!
```
```bash
# Check that the IP ranges took effect (iptables)
$ ip route show
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink

# Check that the IP ranges took effect (ipvs)
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
```
Configuration file definition

- The interface uses the v1beta2 API version
- The master node IP address is configured as 192.168.30.30
- flannel is allocated the 10.244.0.0/16 subnet
- The chosen Kubernetes release is 1.20.2, the latest at the time of writing
- Horizontal-scaling options are added for the controllerManager
```yaml
# kubeadm-config.yaml
# sudo kubeadm config print init-defaults > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
apiServer:
  extraArgs:
    advertise-address: 192.168.30.30
networking:
  podSubnet: 10.244.0.0/16
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-use-rest-clients: "true"
    horizontal-pod-autoscaler-sync-period: "10s"
    node-monitor-grace-period: "10s"
```
On success, kubeadm prints the output below; follow these steps:

- Step 1: when kubectl manages and operates the cluster nodes, it needs the CA credentials, and the transport is secured with TLS. The three commands below copy the credentials into the current user's home directory; kubectl then looks in the .kube directory first and uses them to access the cluster.
- Step 2: when worker nodes are added later, the token is what keeps the join secure, so save the printed join command for later use.
```bash
# master setting step one
To start cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# master setting step two
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.30:6443 --token lebbdi.p9lzoy2a16tmr6hq \
    --discovery-token-ca-cert-hash sha256:6c79fd83825d7b2b0c3bed9e10c428acf8ffcd615a1d7b258e9b500848c20cae
```
Joining the worker nodes to the master
```bash
$ kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   62m   v1.20.2
k8s-node1    NotReady   <none>                 82m   v1.20.2
k8s-node2    NotReady   <none>                 82m   v1.20.2
```
```bash
# List the tokens
$ sudo kubeadm token list

# Create a token
$ sudo kubeadm token create

# If you forgot the sha256 hash
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
```
```bash
# Generate a new token (more convenient than the above)
$ kubeadm token generate

# Generate the join command directly (more convenient than the above)
$ kubeadm token create <token_generate> --print-join-command --ttl=0
```
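Putting it together, each worker (node1/node2) runs the saved join command as root, with the token and hash taken from the init output above:

```bash
$ sudo kubeadm join 192.168.30.30:6443 \
    --token lebbdi.p9lzoy2a16tmr6hq \
    --discovery-token-ca-cert-hash sha256:6c79fd83825d7b2b0c3bed9e10c428acf8ffcd615a1d7b258e9b500848c20cae
```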
Once this completes, you can inspect the master node with the commands below.

- Four namespaces are generated by default
- default, kube-system, kube-public, kube-node-lease
- The core services deployed (in the kube-system namespace) are:
- coredns, etcd
- kube-apiserver, kube-scheduler
- kube-controller-manager, kube-proxy
- At this point the master is still not in the Ready state (a network plugin must be installed)
- In the next section we install the flannel network plugin
```bash
# Namespaces
$ kubectl get namespace
NAME              STATUS   AGE
default           Active   19m
kube-node-lease   Active   19m
kube-public       Active   19m
kube-system       Active   19m

# Core services
$ kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          19m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          19m
etcd-k8s-master                      1/1     Running   0          19m
kube-apiserver-k8s-master            1/1     Running   0          19m
kube-controller-manager-k8s-master   1/1     Running   0          19m
kube-proxy-5rlpv                     1/1     Running   0          19m
kube-scheduler-k8s-master            1/1     Running   0          19m
```
Deploying the flannel network

The network plugin manages the service network of the K8S cluster.

flannel must be given an IP range: the 10.244.0.0/16 configured through the manifest in the previous step. You could deploy it directly from the flannel project with the HELM tool, but the upstream address is hard to reach without a proxy, so save its contents into the configuration file below and change the image address to a reachable mirror.
Deploy the flannel service:
```bash
# Deploy the flannel service
# 1. Change the image address (if the default cannot be pulled)
# 2. Change Network to the IP range passed as --pod-network-cidr
$ kubectl apply -f ./kube-flannel.yml

# If the deployment misbehaves, inspect it with:
$ kubectl logs kube-flannel-ds-6xxs5 --namespace=kube-system
$ kubectl describe pod kube-flannel-ds-6xxs5 --namespace=kube-system
```
If you hit problems along the way, consult the official troubleshooting guide (https://github.com/coreos/fla...).
- Because we are deploying K8S on machines virtualized with Vagrant, applying the yaml initially fails. The logs show that flannel binds the VM's eth0 by default, but that NIC is the one Vagrant uses; we should bind eth1 instead.
- Vagrant usually gives every VM two interfaces; the first is assigned the address 10.0.2.15 on every host and carries NATed outbound traffic, which breaks the flannel deployment. Per the official issue discussion, the --iface=eth1 flag selects the second NIC.
- For ways to pass the flag, see the answers under "flannel use --iface=eth1"; here I simply edited the startup manifest and added it to the container args, as shown in the fragment below.
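Isolated from the full manifest (reproduced further below), the change amounts to this fragment of the DaemonSet container spec:

```yaml
containers:
  - name: kube-flannel
    image: quay.io/coreos/flannel:v0.13.1-rc1
    command:
      - /opt/bin/flanneld
    args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=eth1   # bind Vagrant's private network instead of the NAT interface eth0
```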
```bash
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          61m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          61m
etcd-k8s-master                      1/1     Running   0          62m
kube-apiserver-k8s-master            1/1     Running   0          62m
kube-controller-manager-k8s-master   1/1     Running   0          62m
kube-flannel-ds-zl148                1/1     Running   0          44s
kube-flannel-ds-ll523                1/1     Running   0          44s
kube-flannel-ds-wpmhw                1/1     Running   0          44s
kube-proxy-5rlpv                     1/1     Running   0          61m
kube-scheduler-k8s-master            1/1     Running   0          62m
```
The full configuration file is shown below.
```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ["NET_ADMIN", "NET_RAW"]
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: "RunAsAny"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames: ["psp.flannel.unprivileged"]
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=eth1
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
```
With that, the cluster is deployed successfully! If a parameter was wrong and needs changing, you can also reset and then re-init the cluster.
```bash
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   62m   v1.20.2
k8s-node1    Ready    <none>                 82m   v1.20.2
k8s-node2    Ready    <none>                 82m   v1.20.2

# Redo the cluster
$ sudo kubeadm reset
$ sudo kubeadm init
```
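Note that kubeadm reset alone leaves some state behind; before re-running init it may help to also clear the leftovers it mentions on the way out. A sketch:

```bash
$ sudo kubeadm reset
$ sudo rm -rf /etc/cni/net.d     # CNI config written by flannel
$ rm -rf $HOME/.kube/config      # stale kubeconfig for kubectl
$ sudo ipvsadm --clear           # leftover ipvs rules, if any
```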
Deploying the dashboard service

Monitor the state of the cluster through the visual web dashboard.

Here you again run into needing a proxy to download the startup manifest; the corresponding download address is given below, and you can download the file elsewhere and upload it to the server before deploying.
Deploy the dashboard service:
```bash
# Deploy the dashboard service
$ kubectl apply -f ./kube-dashboard.yaml

# If the deployment misbehaves, inspect it with:
$ kubectl logs kubernetes-dashboard-c9fb67ffc-nknpj --namespace=kubernetes-dashboard
$ kubectl describe pod kubernetes-dashboard-c9fb67ffc-nknpj --namespace=kubernetes-dashboard
```
```bash
$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.245.214.11    <none>        8000/TCP   26s
kubernetes-dashboard        ClusterIP   10.245.161.146   <none>        443/TCP    26s
```
Note that the dashboard does not allow external access by default, even if kubectl proxy is told to accept external hosts. The dashboard itself only accepts HTTPS, and the CA certificate self-signed during kubeadm init is not recognized by browsers.

The approach I took is Nginx as a reverse proxy, serving a valid certificate issued by Let's Encrypt, and forwarding via the proxy_pass directive to kubectl proxy, as shown below. The dashboard is then reachable locally via port 8888, and externally through Nginx.
```bash
# Proxy (can be run under supervisor)
$ kubectl proxy --accept-hosts='^*$'
$ kubectl proxy --port=8888 --accept-hosts='^*$'

# Test that the proxy responds (it listens on port 8001 by default)
$ curl -X GET -L http://localhost:8001

# Local side (can be fronted with nginx)
proxy_pass http://localhost:8001;
proxy_pass http://localhost:8888;

# External access goes to the following URL
https://mydomain/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
```
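A quick way to sanity-check each hop of the chain; the path comes from the URL above, and port 8080 comes from the nginx config below (treat this as a sketch):

```bash
# Hop 1: kubectl proxy answers locally on 8001
$ curl -s http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ | head

# Hop 2: nginx terminates TLS on 8080 and forwards to the proxy
$ curl -sk https://localhost:8080/ | head
```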
The supporting configuration files:
- nginx
- supervisor
```nginx
# k8s.conf
client_max_body_size 80M;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;

server {
    listen 8080 ssl;
    server_name _;

    ssl_certificate /etc/kubernetes/pki/ca.crt;
    ssl_certificate_key /etc/kubernetes/pki/ca.key;

    access_log /var/log/nginx/k8s.access.log;
    error_log /var/log/nginx/k8s.error.log error;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/;
    }
}
```
```ini
# k8s.conf
[program:k8s-master]
command=kubectl proxy --accept-hosts='^*$'
user=vagrant
environment=KUBECONFIG="/home/vagrant/.kube/config"
stopasgroup=true
killasgroup=true
autostart=true
autorestart=unexpected
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_logfile=/var/log/supervisor/k8s-stderr.log
stdout_logfile=/var/log/supervisor/k8s-stdout.log
```
The dashboard manifest is shown below in full.
```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames:
      - "kubernetes-dashboard-key-holder"
      - "kubernetes-dashboard-certs"
      - "kubernetes-dashboard-csrf"
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames:
      - "heapster"
      - "http:heapster:"
      - "https:heapster:"
      - "dashboard-metrics-scraper"
      - "http:dashboard-metrics-scraper"
    verbs: ["get"]

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: registry.cn-shanghai.aliyuncs.com/jieee/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: registry.cn-shanghai.aliyuncs.com/jieee/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
```
Method 1: logging in to the dashboard (with manifests)

- Using a token
- Using a kubeconfig/key file
```bash
# Create an admin account (dashboard)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```
```bash
# Bind the user to the existing cluster-admin role
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
```
```bash
# Fetch the token that can be used to log in
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
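If you prefer a one-liner that prints only the token, jsonpath works too. A sketch; it relies on the auto-created ServiceAccount token secret, which exists on clusters of this vintage (pre-1.24):

```bash
$ kubectl -n kubernetes-dashboard get secret \
    $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
    -o jsonpath='{.data.token}' | base64 --decode
```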
The login screen

- In Chrome, click anywhere on the blank warning page and type: thisisunsafe
- In Firefox, when the certificate is rejected, add a security exception
Method 2: granting dashboard permissions (without manifests)

- If you are told you lack permissions after logging in, run the following
- Bind the serviceaccount to cluster-admin
- This grants the serviceaccount administrative access to the entire cluster
```bash
# Create the serviceaccount
$ kubectl create serviceaccount dashboard-admin -n kube-system

# Bind the serviceaccount to cluster-admin,
# granting it administrative access to the entire cluster
$ kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# List the secrets for the serviceaccount; they contain the token
$ kubectl get secret -n kube-system

# Describe the dashboard-admin-token-slfcr secret obtained above
$ kubectl describe secret <dashboard-admin-token-slfcr> -n kube-system

# Open the login page in a browser and paste the token in
https://192.168.30.30:8080/

# Shortcut for viewing the token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/admin/{print $1}')
```
Author: Escape
Link: https://www.escapelife.site/p...