Author: Lao Z, operations architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., a cloud-native enthusiast currently focused on cloud-native operations. His technology stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.
KubeKey is an open-source, lightweight tool for deploying K8s clusters.
It provides a flexible, fast, and convenient way to install only Kubernetes/K3s, or to install K8s/K3s together with KubeSphere and other cloud-native add-ons. It is also an efficient tool for scaling and upgrading clusters.
KubeKey v2.1.0 introduced the concepts of manifest and artifact, giving users a solution for deploying K8s clusters offline.
A manifest is a text file that describes the current K8s cluster information and defines what the artifact should contain.
In the past, users had to prepare the deployment tools, image tar packages, and other related binaries themselves, and every user needed a different K8s version and a different set of images. With KubeKey, a user now only needs a manifest file to define what the offline cluster environment requires, and then exports an artifact file from that manifest to finish the preparation. For the offline deployment itself, only KubeKey and the artifact are needed to quickly and simply deploy the image registry and the K8s cluster.
KubeKey can generate a manifest file in two ways.
- Generate the manifest from an existing running cluster. This is the officially recommended approach; see the offline deployment documentation on the KubeSphere website for details.
- Write the manifest by hand based on the template file.
The benefit of the first approach is that it produces a 1:1 copy of the running environment, but it requires a cluster to be deployed in advance, which is less flexible and not a condition everyone can meet.
Therefore, this article follows the official offline documentation but takes the hand-written manifest approach to install and deploy in an offline environment.
What this article covers
- Level: beginner
- Understand the concepts of manifest and artifact
- Learn how to write a manifest
- Build an artifact from a manifest
- Deploy KubeSphere and Kubernetes offline
Demo server configuration
Host Name | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Purpose |
---|---|---|---|---|---|---|
zdeops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible operations control node |
ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker/Ceph |
es-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200 | ElasticSearch |
es-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200 | ElasticSearch |
es-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200 | ElasticSearch |
harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
Total (8 hosts) | | 22 | 84 | 320 | 2200 | |
Software versions used in the demo environment
- OS: CentOS-7.9-x86_64
- KubeSphere: 3.3.0
- Kubernetes: 1.24.1
- KubeKey: v2.2.1
- Ansible: 2.8.20
- Harbor: 2.5.1
Building the offline deployment resources
Download KubeKey
# Run on the zdevops-master ops server
# Use the CN download zone (for when access to GitHub is restricted)
$ export KKZONE=cn
# Download KubeKey
$ mkdir /data/kubekey
$ cd /data/kubekey/
$ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
Get the manifest template
See https://github.com/kubesphere…
There are two reference examples, a simple one and a complete one. The simple one is sufficient here.
Get the ks-installer images-list
$ wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
The image list in that file points at Docker Hub and the public repositories where the other components are hosted; within China it is recommended to change all prefixes uniformly to registry.cn-beijing.aliyuncs.com/kubesphereio.
The complete modified image list is shown in the manifest file below.
Note that of the example-images, only busybox is kept; the others are not used in this article.
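If you start from the upstream images-list.txt, the prefix replacement can be scripted rather than done by hand. A minimal sketch, assuming the file contains entries of the form docker.io/&lt;org&gt;/&lt;image&gt;:&lt;tag&gt; with `##` header lines (the output file name manifest-images.txt is my own choice):

```shell
# Keep only image lines, swap the Docker Hub prefix for the Aliyun mirror
# namespace used in this article, and indent each line as a manifest list item.
if [ -f images-list.txt ]; then
  grep '^docker.io/' images-list.txt \
    | sed 's#^docker.io/[^/]*/#registry.cn-beijing.aliyuncs.com/kubesphereio/#; s#^#  - #' \
    > manifest-images.txt
  head -n 3 manifest-images.txt
fi
```

The resulting lines can be pasted directly under the `images:` key of the manifest.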
Get the OS dependency packages
$ wget https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso
Put this ISO file into the /data/kubekey directory on the server used to build the offline resources.
Generate the manifest file
Based on the files and information above, generate the final manifest.yaml.
Name it ks-v3.3.0-manifest.yaml.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: "/data/kubekey/centos7-rpms-amd64.iso"
        url:
  kubernetesDistributions:
  - type: kubernetes
    version: v1.24.1
  components:
    helm:
      version: v3.6.3
    cni:
      version: v0.9.1
    etcd:
      version: v3.4.13
    containerRuntimes:
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.13
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
  registry:
    auths: {}
Notes on the manifest modifications
- Enable the harbor and docker-compose entries; they are used later when building a Harbor registry with KubeKey and pushing images to it.
- The image list in a freshly created manifest pulls from docker.io; replace the prefix with registry.cn-beijing.aliyuncs.com/kubesphereio.
- If the exported artifact should include operating-system dependency files (such as conntrack and chrony), configure the ISO under .repository.iso in the operatingSystems element: either set url to a download address for the corresponding ISO, or set localPath to where the pre-downloaded ISO is stored locally and leave url empty.
- You can download the ISO files from https://github.com/kubesphere…
Export the artifact
$ export KKZONE=cn
$ ./kk artifact export -m ks-v3.3.0-manifest.yaml -o kubesphere-v3.3.0-artifact.tar.gz
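Once the export finishes, it can be worth a quick look inside the artifact before copying it into the air-gapped environment. A minimal sketch (the file name matches the export command above; what exactly the archive contains depends on the KubeKey version):

```shell
# List the first entries of the exported artifact so its contents can be
# eyeballed (image blobs, repository ISO, etc.) before shipping it offline.
ARTIFACT=kubesphere-v3.3.0-artifact.tar.gz
if [ -f "$ARTIFACT" ]; then
  tar -tzf "$ARTIFACT" | head -n 20
else
  echo "artifact not found: $ARTIFACT"
fi
```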
About the artifact
- An artifact is a tgz package, exported from the contents of a given manifest file, that contains the image tar packages and related binaries.
- An artifact can be specified in the KubeKey commands for initializing an image registry, creating a cluster, adding nodes, and upgrading a cluster; KubeKey automatically unpacks it and uses the unpacked files directly while executing the command.
Export KubeKey
$ tar zcvf kubekey-v2.2.1.tar.gz kk kubekey-v2.2.1-linux-amd64.tar.gz
Initial configuration of the K8s servers
This section performs the initial configuration of the K8s servers in the offline environment.
Ansible hosts configuration
[k8s]
ks-k8s-master-0 ansible_ssh_host=192.168.9.91 host_name=ks-k8s-master-0
ks-k8s-master-1 ansible_ssh_host=192.168.9.92 host_name=ks-k8s-master-1
ks-k8s-master-2 ansible_ssh_host=192.168.9.93 host_name=ks-k8s-master-2
[es]
es-node-0 ansible_ssh_host=192.168.9.95 host_name=es-node-0
es-node-1 ansible_ssh_host=192.168.9.96 host_name=es-node-1
es-node-2 ansible_ssh_host=192.168.9.97 host_name=es-node-2
harbor ansible_ssh_host=192.168.9.89 host_name=harbor
[servers:children]
k8s
es
[servers:vars]
ansible_connection=paramiko
ansible_ssh_user=root
ansible_ssh_pass=F@ywwpTj4bJtYwzpwCqD
Check server connectivity
# Use ansible to check connectivity to the servers
$ cd /data/ansible/ansible-zdevops/inventories/dev/
$ source /opt/ansible2.8/bin/activate
$ ansible -m ping all
Initialize the server configuration
# Use ansible-playbook to initialize the server configuration
$ ansible-playbook ../../playbooks/init-base.yaml -l k8s
Mount the data disks
- Mount the first data disk
# Use ansible-playbook to initialize the hosts' data disks
# Note: -e data_disk_path="/data" sets the mount directory, used to store Docker container data
$ ansible-playbook ../../playbooks/init-disk.yaml -e data_disk_path="/data" -l k8s
- Verify the mounts
# Use ansible to verify that the data disk has been formatted and mounted
$ ansible harbor -m shell -a 'df -h'
# Use ansible to verify that the data disk is configured to mount automatically
$ ansible harbor -m shell -a 'tail -1 /etc/fstab'
Install the K8s system dependency packages
# Use ansible-playbook to install the Kubernetes system dependency packages
# The playbook contains a switch for enabling GlusterFS storage, on by default; set the parameter to False if it is not needed
$ ansible-playbook ../../playbooks/deploy-kubesphere.yaml -e k8s_storage_glusterfs=false -l k8s
Install the cluster offline
Transfer the offline deployment resources to the deployment node
Copy the following offline deployment resources to the /data/kubekey directory on the deployment node (usually the first master node).
- KubeKey: kubekey-v2.2.1.tar.gz
- Artifact: kubesphere-v3.3.0-artifact.tar.gz
Then unpack KubeKey:
$ cd /data/kubekey
$ tar xvf kubekey-v2.2.1.tar.gz
Create the offline cluster configuration file
- Create the configuration file
$ ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.24.1 -f config-sample.yaml
- Edit the configuration file
$ vim config-sample.yaml
What to change
- Update the node information according to your actual offline environment.
- Add the registry information according to your actual setup.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-k8s-master-0, address: 192.168.9.91, internalAddress: 192.168.9.91, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-1, address: 192.168.9.92, internalAddress: 192.168.9.92, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  - {name: ks-k8s-master-2, address: 192.168.9.93, internalAddress: 192.168.9.93, user: root, password: "F@ywwpTj4bJtYwzpwCqD"}
  roleGroups:
    etcd:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    control-plane:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
    worker:
    - ks-k8s-master-0
    - ks-k8s-master-1
    - ks-k8s-master-2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.zdevops.com.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: zdevops.com.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: "harbor"
    auths:
      "registry.zdevops.com.cn":
        username: admin
        password: Harbor12345
    privateRegistry: "registry.zdevops.com.cn"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
  # The content below is left unmodified and is not shown here
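After editing, a quick sanity check can catch copy-paste mistakes in the node entries before kk is run. A minimal sketch, assuming config-sample.yaml sits in the current directory:

```shell
# List the host entries in the generated config so the node names,
# addresses, and internalAddresses can be double-checked at a glance.
if [ -f config-sample.yaml ]; then
  grep -E '^[[:space:]]*- \{name:' config-sample.yaml
fi
```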
Create the projects in Harbor
This article uses a pre-deployed Harbor to store the images; for the deployment process, see my earlier Harbor installation notes from the "Playing with K8s on KubeSphere" series.
You can also have kk deploy Harbor automatically; see the official offline deployment documentation for details.
- Download the project-creation script template
$ curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh
- Edit the project script as needed
#!/usr/bin/env bash
# Harbor registry address
url="https://registry.zdevops.com.cn"
# Harbor user
user="admin"
# Harbor user password
passwd="Harbor12345"
# List of projects to create. Normally only kubesphereio is needed;
# two extra entries are kept here for future flexibility.
harbor_projects=(library
kubesphereio
kubesphere
)
for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{\"project_name\": \"${project}\", \"public\": true}"
done
- Run the script to create the projects
$ sh create_project_harbor.sh
Push the offline images to the Harbor registry
Push the prepared offline images to the Harbor registry. This step is optional, because the images are pushed again during cluster creation; however, pushing them first is recommended to improve the chance of a successful one-shot deployment.
$ ./kk artifact image push -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz
Create the cluster and install the OS dependencies
$ ./kk create cluster -f config-sample.yaml -a kubesphere-v3.3.0-artifact.tar.gz --with-packages
Parameter notes
- config-sample.yaml: the configuration file for the offline cluster.
- kubesphere-v3.3.0-artifact.tar.gz: the artifact package.
- --with-packages: specify this option if the operating-system dependencies should be installed.
Check the cluster status
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
When the installation completes correctly, you will see the following output:
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.9.91:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.
#####################################################
https://kubesphere.io             2022-06-30 14:30:19
#####################################################
Log in to the web console
Open http://{IP}:30880
and log in with the default account and password admin/P@88w0rd
to access the KubeSphere web console and continue with the remaining configuration.
Summary
Thank you for reading this article all the way through. By now you should have picked up the following skills:
- Understanding the concepts of manifest and artifact
- Knowing where to obtain the manifest and image resources
- Writing a manifest by hand
- Building an artifact from a manifest
- Deploying KubeSphere and Kubernetes offline
- Creating Harbor projects automatically
- A few small Ansible tips
So far, we have completed a minimal deployment of KubeSphere and a K8s cluster. But this is only the beginning; many configuration and usage techniques are still to come, so stay tuned…