KubeSphere official site: https://kubesphere.com.cn
For offline installation of a Kubernetes cluster, see the sealyun official site: https://www.sealyun.com
Prerequisites
- Install Kubernetes first. For offline installation of a Kubernetes cluster, see: https://www.sealyun.com
Modify Docker's daemon.json file on every node in the Kubernetes cluster
vi /etc/docker/daemon.json
Add the Harbor private registry address
Without this entry, the later installation of KubeSphere components will issue HTTPS requests. We use HTTP here because HTTPS is unnecessary in an offline intranet environment.
{
  "insecure-registries": ["192.168.28.150:8001"]
}
Restart the Docker service so the configuration takes effect
systemctl daemon-reload
systemctl restart docker
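As an optional sanity check (not part of the original steps), confirm that Docker picked up the new setting:
# 192.168.28.150:8001 should appear under "Insecure Registries"
docker info | grep -A 2 "Insecure Registries"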
Log in to Harbor
docker login 192.168.28.150:8001 -u <username> -p <password>
Install the NFS server
1. Install the packages
On a machine with internet access, download the NFS packages for later installation on both the server and the clients:
yum -y install nfs-utils --downloadonly --downloaddir /home/nfs
After the download finishes, pack the files, upload them to the target server, and run the following command to install:
rpm -Uvh *.rpm --nodeps --force
2. Edit the configuration file
⚠️ The * in the configuration file allows all networks; specify your own subnet as appropriate.
cat >/etc/exports <<EOF
/mnt/kubesphere *(insecure,rw,async,no_root_squash)
EOF
3. Create the directory and adjust its permissions
⚠️ For convenience in this walkthrough the mount directory is given permission 777; adjust the permissions and owner to suit your environment.
mkdir /mnt/kubesphere && chmod 777 /mnt/kubesphere
4. Start the services
systemctl enable nfs-server rpcbind && systemctl start nfs-server rpcbind
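Optionally, verify on the server that the directory is actually being exported (a standard NFS check, not part of the original steps):
# Both commands should list /mnt/kubesphere
exportfs -v
showmount -e localhost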
Configure the NFS clients
Install the NFS client on every node, using the packages downloaded earlier:
rpm -Uvh *.rpm --nodeps --force
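Before continuing, it is worth checking from a node that the export is visible over the network (assuming the NFS server address 192.168.28.160 used later in this guide):
# Should print: /mnt/kubesphere *
showmount -e 192.168.28.160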
Perform the following operations on any node in the k8s cluster.
Create the file storageclass.yaml with the following content:
cat >storageclass.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get","create","list", "watch","update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 192.168.28.150:8001/kubesphere-install/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.28.160
            - name: NFS_PATH
              value: /mnt/kubesphere
      volumes:
        - name: nfs-client
          nfs:
            server: 192.168.28.160
            path: /mnt/kubesphere
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
EOF
Apply storageclass.yaml
kubectl apply -f storageclass.yaml
Set the default StorageClass
kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify
kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7b9746695c-nrz4n   1/1     Running   0          2m38s
Check the default storage class
kubectl get sc
NAME                    PROVISIONER      AGE
nfs-storage (default)   fuseim.pri/ifs   7m22s
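To confirm that dynamic provisioning works end to end, you can create a small throwaway PVC against the nfs-storage class (a hypothetical test object, not part of the original steps). It should reach the Bound state within a few seconds:
cat >test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-pvc        # STATUS should become Bound
kubectl delete -f test-pvc.yaml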
Prepare the installation images
When installing KubeSphere in an offline environment, you need to prepare a package containing all the required images in advance.
Use the following command on a machine with internet access to download the image list file images-list.txt:
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/images-list.txt
This file lists the images by module under ##+modulename headers. You can add your own images to the file following the same rule; an excerpt illustrating the format follows. For the complete file, see the appendix.
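For reference, the format looks like this (excerpt from the appendix), with a ##modulename header followed by one image per line:
##k8s-images
kubesphere/kube-apiserver:v1.20.4
kubesphere/kube-scheduler:v1.20.4
...
##csi-images
csiplugin/csi-provisioner:v1.5.0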
Download offline-installation-tool.sh.
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/offline-installation-tool.sh
Make the .sh file executable.
chmod +x offline-installation-tool.sh
You can run ./offline-installation-tool.sh -h to see how to use the script:
root@master:/home/ubuntu# ./offline-installation-tool.sh -h
Usage:

  ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION ]

Description:
  -b                     : save kubernetes' binaries.
  -d IMAGES-DIR          : the dir of files (tar.gz) which generated by `docker save`. default: ./kubesphere-images
  -l IMAGES-LIST         : text file with list of images.
  -r PRIVATE-REGISTRY    : target private registry:port.
  -s                     : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
  -v KUBERNETES-VERSION  : download kubernetes' binaries. default: v1.17.9
  -h                     : usage message
Pull the images with offline-installation-tool.sh.
./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
Push the images to the private registry
Transfer the packaged image files to your local machine and run the following command to push them to the registry.
./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r 192.168.28.150:8001/kubesphere-install
The registry address in the command is 192.168.28.150:8001/kubesphere-install. Make sure you use your own registry address.
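To double-check that the images actually landed in the registry, you can browse the project in the Harbor UI, or query the Docker Registry v2 catalog endpoint over HTTP (it may require credentials, depending on your Harbor project settings):
# Lists the repositories known to the registry
curl http://192.168.28.150:8001/v2/_catalog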
Download the deployment files
As with installing KubeSphere online on an existing Kubernetes cluster, you need to download cluster-configuration.yaml and kubesphere-installer.yaml in advance.
Run the following commands to download the two files, then transfer them to the machine that acts as the taskbox for the installation.
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
- Edit cluster-configuration.yaml to add your private image registry. For example, the registry address in this tutorial is 192.168.28.150:8001/kubesphere-install; use it as the value of .spec.local_registry, as shown below:
spec:
  persistence:
    storageClass: ""        # NFS was set up as the default storage class in the steps above, so no value is needed here
  authentication:
    jwtSecret: ""
  local_registry: 192.168.28.150:8001/kubesphere-install  # Add this line manually; make sure you use your own registry address.
You can enable pluggable components in this YAML file to explore more of KubeSphere's features; see Enabling Pluggable Components for details.
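For example, each component is switched on by setting its enabled field in cluster-configuration.yaml. The snippet below shows the DevOps system with only the relevant key; the v3.1.0 file contains further sub-keys for each component:
spec:
  devops:
    enabled: true   # set to true to install the Jenkins-based DevOps system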
- Save cluster-configuration.yaml after editing, then use the following command to point ks-installer at your own registry.
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: 192.168.28.150:8001/kubesphere-install/kubesphere/ks-installer:v3.1.0#" kubesphere-installer.yaml
The registry address in the command is 192.168.28.150:8001/kubesphere-install. Make sure you use your own registry address.
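You can confirm the replacement took effect before applying the file:
# Should print the image line pointing at your private registry
grep "image:" kubesphere-installer.yaml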
Start the installation
Once you are sure all the steps above are complete, run the following commands.
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
Check the installation logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
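If the log command fails at first because the installer pod has not started yet, check the pod state and retry once it is Running:
kubectl get pod -n kubesphere-system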
Verify the installation
When the installation finishes, you will see output like the following:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.28.140:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in the
     "Cluster Management". If any service is not ready,
     please wait patiently until all components are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             2021-02-09 13:50:43
#####################################################
You can now access the KubeSphere web console at http://{IP}:30880 with the default account and password admin/P@88w0rd.
To access the console, make sure port 30880 is open in your security group.
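If the console is not reachable, verify the NodePort that ks-console is exposed on (the same check suggested in the KubeSphere documentation):
kubectl get svc ks-console -n kubesphere-system   # PORT(S) should include 30880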
Appendix
KubeSphere v3.1.0 image list
##k8s-images
kubesphere/kube-apiserver:v1.20.4
kubesphere/kube-scheduler:v1.20.4
kubesphere/kube-proxy:v1.20.4
kubesphere/kube-controller-manager:v1.20.4
kubesphere/kube-apiserver:v1.19.8
kubesphere/kube-scheduler:v1.19.8
kubesphere/kube-proxy:v1.19.8
kubesphere/kube-controller-manager:v1.19.8
kubesphere/kube-apiserver:v1.18.6
kubesphere/kube-scheduler:v1.18.6
kubesphere/kube-proxy:v1.18.6
kubesphere/kube-controller-manager:v1.18.6
kubesphere/kube-apiserver:v1.17.9
kubesphere/kube-scheduler:v1.17.9
kubesphere/kube-proxy:v1.17.9
kubesphere/kube-controller-manager:v1.17.9
kubesphere/pause:3.1
kubesphere/pause:3.2
kubesphere/etcd:v3.4.13
calico/cni:v3.16.3
calico/kube-controllers:v3.16.3
calico/node:v3.16.3
calico/pod2daemon-flexvol:v3.16.3
coredns/coredns:1.6.9
kubesphere/k8s-dns-node-cache:1.15.12
openebs/provisioner-localpv:2.3.0
openebs/linux-utils:2.3.0
kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
##csi-images
csiplugin/csi-neonsan:v1.2.0
csiplugin/csi-neonsan-ubuntu:v1.2.0
csiplugin/csi-neonsan-centos:v1.2.0
csiplugin/csi-provisioner:v1.5.0
csiplugin/csi-attacher:v2.1.1
csiplugin/csi-resizer:v0.4.0
csiplugin/csi-snapshotter:v2.0.1
csiplugin/csi-node-driver-registrar:v1.2.0
csiplugin/csi-qingcloud:v1.2.0
##kubesphere-images
kubesphere/ks-apiserver:v3.1.0
kubesphere/ks-console:v3.1.0
kubesphere/ks-controller-manager:v3.1.0
kubesphere/ks-installer:v3.1.0
kubesphere/kubectl:v1.17.0
kubesphere/kubectl:v1.18.0
kubesphere/kubectl:v1.19.0
kubesphere/kubectl:v1.20.0
redis:5.0.5-alpine
alpine:3.10.4
haproxy:2.0.4
nginx:1.14-alpine
minio/minio:RELEASE.2019-08-07T01-59-21Z
minio/mc:RELEASE.2019-08-07T23-14-43Z
mirrorgooglecontainers/defaultbackend-amd64:1.4
kubesphere/nginx-ingress-controller:v0.35.0
osixia/openldap:1.3.0
csiplugin/snapshot-controller:v2.0.1
kubesphere/kubefed:v0.7.0
kubesphere/tower:v0.2.0
kubesphere/prometheus-config-reloader:v0.42.1
kubesphere/prometheus-operator:v0.42.1
prom/alertmanager:v0.21.0
prom/prometheus:v2.26.0
prom/node-exporter:v0.18.1
kubesphere/ks-alerting-migration:v3.1.0
jimmidyson/configmap-reload:v0.3.0
kubesphere/notification-manager-operator:v1.0.0
kubesphere/notification-manager:v1.0.0
kubesphere/metrics-server:v0.4.2
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/kube-state-metrics:v1.9.7
openebs/provisioner-localpv:2.3.0
thanosio/thanos:v0.18.0
grafana/grafana:7.4.3
##kubesphere-logging-images
kubesphere/elasticsearch-oss:6.7.0-1
kubesphere/elasticsearch-curator:v5.7.6
kubesphere/fluentbit-operator:v0.5.0
kubesphere/fluentbit-operator:migrator
kubesphere/fluent-bit:v1.6.9
elastic/filebeat:6.7.0
kubesphere/kube-auditing-operator:v0.1.2
kubesphere/kube-auditing-webhook:v0.1.2
kubesphere/kube-events-exporter:v0.1.0
kubesphere/kube-events-operator:v0.1.0
kubesphere/kube-events-ruler:v0.2.0
kubesphere/log-sidecar-injector:1.1
docker:19.03
##istio-images
istio/pilot:1.6.10
istio/proxyv2:1.6.10
jaegertracing/jaeger-agent:1.17
jaegertracing/jaeger-collector:1.17
jaegertracing/jaeger-es-index-cleaner:1.17
jaegertracing/jaeger-operator:1.17.1
jaegertracing/jaeger-query:1.17
kubesphere/kiali:v1.26.1
kubesphere/kiali-operator:v1.26.1
##kubesphere-devops-images
kubesphere/ks-jenkins:2.249.1
jenkins/jnlp-slave:3.27-1
kubesphere/s2ioperator:v3.1.0
kubesphere/s2irun:v2.1.1
kubesphere/builder-base:v2.1.0
kubesphere/builder-nodejs:v2.1.0
kubesphere/builder-maven:v2.1.0
kubesphere/builder-go:v2.1.0
kubesphere/s2i-binary:v2.1.0
kubesphere/tomcat85-java11-centos7:v2.1.0
kubesphere/tomcat85-java11-runtime:v2.1.0
kubesphere/tomcat85-java8-centos7:v2.1.0
kubesphere/tomcat85-java8-runtime:v2.1.0
kubesphere/java-11-centos7:v2.1.0
kubesphere/java-8-centos7:v2.1.0
kubesphere/java-8-runtime:v2.1.0
kubesphere/java-11-runtime:v2.1.0
kubesphere/nodejs-8-centos7:v2.1.0
kubesphere/nodejs-6-centos7:v2.1.0
kubesphere/nodejs-4-centos7:v2.1.0
kubesphere/python-36-centos7:v2.1.0
kubesphere/python-35-centos7:v2.1.0
kubesphere/python-34-centos7:v2.1.0
kubesphere/python-27-centos7:v2.1.0
kubesphere/notification:flyway_v2.1.2
kubesphere/notification:v2.1.2
##openpitrix-images
kubesphere/openpitrix-jobs:v3.1.0
##weave-scope-images
weaveworks/scope:1.13.0
##kubeedge-images
kubeedge/cloudcore:v1.6.1
kubesphere/edge-watcher:v0.1.0
kubesphere/kube-rbac-proxy:v0.5.0
kubesphere/edge-watcher-agent:v0.1.0
##example-images
kubesphere/examples-bookinfo-productpage-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v2:1.16.2
kubesphere/examples-bookinfo-reviews-v3:1.16.2
kubesphere/examples-bookinfo-details-v1:1.16.2
kubesphere/examples-bookinfo-ratings-v1:1.16.3
busybox:1.31.1
joosthofman/wget:1.0
kubesphere/netshoot:v1.0
nginxdemos/hello:plain-text
wordpress:4.8-apache
mirrorgooglecontainers/hpa-example:latest
java:openjdk-8-jre-alpine
fluent/fluentd:v1.4.2-2.0
perl:latest