
About KubeSphere: Installing KubeSphere v3.1.0 on Kubernetes in an Offline Environment

KubeSphere official site: https://kubesphere.com.cn
For offline installation of the Kubernetes cluster itself, see the sealyun site: https://www.sealyun.com

Prerequisites

  1. Install Kubernetes first. For offline installation of the Kubernetes cluster, see: https://www.sealyun.com
  2. Edit the docker daemon.json file on every node of the Kubernetes cluster

    vi /etc/docker/daemon.json
  3. Add the Harbor private registry address
    Without this entry, docker will try HTTPS requests when the KubeSphere components are installed later; here we use plain HTTP, because HTTPS is unnecessary in an intranet offline environment.

    {
      "insecure-registries":
            ["192.168.28.150:8001"]
    }
  4. Restart the docker service so the configuration takes effect (a quick check follows this list)

    systemctl daemon-reload
    systemctl restart docker
  5. Log in to Harbor

    docker login 192.168.28.150:8001 -u <username> -p <password>
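
After restarting docker, you can quickly confirm that the insecure-registry entry was picked up (the address below is the example registry used throughout this guide; replace it with your own):

docker info | grep -A 3 "Insecure Registries"

The output should list 192.168.28.150:8001 under "Insecure Registries".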

Install the NFS Server

1. Install the packages
  On a machine with internet access, download the NFS packages for later installation on the server and clients:
  yum -y install nfs-utils --downloadonly --downloaddir=/home/nfs
  Once downloaded, package and upload the rpm files to the target server, then run the following command to install:
rpm -Uvh *.rpm --nodeps --force

2. Edit the exports file
⚠️ The * in the config below allows all networks; specify your actual network range as appropriate.
cat >/etc/exports <<EOF
/mnt/kubesphere *(insecure,rw,async,no_root_squash) 
EOF

3. Create the export directory and set its permissions
⚠️ For convenience in this lab the mount directory is given 777 permissions; adjust the directory permissions and owner to suit your environment.
mkdir /mnt/kubesphere && chmod 777 /mnt/kubesphere

4. Enable and start the services
systemctl enable nfs-server rpcbind && systemctl start nfs-server rpcbind
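
You can verify that the export is active before moving on (showmount is part of nfs-utils; run it on the NFS server):

showmount -e localhost

It should list /mnt/kubesphere with the network range configured in /etc/exports.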

Configure the NFS Client

Install the NFS client on all worker nodes using the rpm packages downloaded earlier:

rpm -Uvh *.rpm --nodeps --force
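
A quick check that the package landed on each node:

rpm -q nfs-utils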

Then run the following steps on any node in the k8s cluster.

Create a file storageclass.yaml with the content below (adjust the provisioner image registry, NFS_SERVER, and NFS_PATH to your own environment):

cat >storageclass.yaml <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  selector:
    matchLabels:
      app: nfs-client-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 192.168.28.150:8001/kubesphere-install/kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.28.160
            - name: NFS_PATH
              value: /mnt/kubesphere
      volumes:
        - name: nfs-client
          nfs:
            server: 192.168.28.160
            path: /mnt/kubesphere
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
EOF
  1. Apply storageclass.yaml

    kubectl apply -f storageclass.yaml
  2. Set the default StorageClass

    kubectl patch storageclass nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  3. Verify that the provisioner pod is running (an optional end-to-end check with a test PVC follows this list)

    kubectl get pods
    
    NAME                                      READY   STATUS    RESTARTS   AGE
    nfs-client-provisioner-7b9746695c-nrz4n   1/1     Running   0          2m38s

    Check the default StorageClass:

    kubectl get sc
    
    NAME                    PROVISIONER      AGE
    nfs-storage (default)   fuseim.pri/ifs   7m22s
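
As an optional end-to-end check, you can create a throwaway PVC and confirm that the provisioner binds it (a minimal sketch; the claim name test-claim is arbitrary):

cat >test-claim.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-claim.yaml
kubectl get pvc test-claim

The claim should reach the Bound status within a few seconds; remove it afterwards with kubectl delete -f test-claim.yaml.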

Prepare the Installation Images

When installing KubeSphere in an offline environment, you need to prepare an image package containing all the required images in advance.

  • On a machine that can access the internet, download the image list file images-list.txt with the following command:

    curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/images-list.txt

This file lists the images by module under headings of the form ##modulename. You can add your own images to the file following the same rule (see the example below). For the complete file, see the appendix.
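
For instance, to have the tooling handle a few images of your own, you could append a section like this to images-list.txt (the module name and images below are purely illustrative):

##custom-images
myrepo/myapp:1.0.0
myrepo/myjob:2.3.1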

  • Download offline-installation-tool.sh:

    curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/offline-installation-tool.sh
  • Make the .sh file executable:

    chmod +x offline-installation-tool.sh
  • You can run ./offline-installation-tool.sh -h to see how to use the script:

    root@master:/home/ubuntu# ./offline-installation-tool.sh -h
    Usage:
     
    ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION]
     
    Description:
    -b                     : save kubernetes' binaries.
    -d IMAGES-DIR          : the dir of files (tar.gz) which generated by `docker save`. default: ./kubesphere-images
    -l IMAGES-LIST         : text file with list of images.
    -r PRIVATE-REGISTRY    : target private registry:port.
    -s                     : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
    -v KUBERNETES-VERSION  : download kubernetes' binaries. default: v1.17.9
    -h                     : usage message
  • Pull the images with offline-installation-tool.sh:

    ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images

    Push the images to the private registry

    Transfer the packaged image files to your local machine and run the following command to push them to the registry:

./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r 192.168.28.150:8001/kubesphere-install

The registry address in the command is 192.168.28.150:8001/kubesphere-install. Make sure you use your own registry address, and create the target project (here kubesphere-install) in Harbor before pushing.
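
Once the push finishes, a quick way to confirm the images landed in the registry is to pull one of them back on any node (the path below assumes the example registry and project used in this guide):

docker pull 192.168.28.150:8001/kubesphere-install/kubesphere/ks-installer:v3.1.0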

Download the Deployment Files

As with an online installation of KubeSphere on an existing Kubernetes cluster, you need to download cluster-configuration.yaml and kubesphere-installer.yaml in advance.

  • Run the following commands to download the two files, then transfer them to the machine you use as the task box for the installation.

    curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml
    curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
  • Edit cluster-configuration.yaml to add your private image registry. For example, the registry address in this tutorial is 192.168.28.150:8001/kubesphere-install; use it as the value of .spec.local_registry, as shown below:
spec:
  persistence:
    storageClass: "" # NFS was set up as the default StorageClass in the steps above, so no value is needed here
  authentication:
    jwtSecret: ""
  local_registry: 192.168.28.150:8001/kubesphere-install # Add this line manually; make sure you use your own registry address.

You can enable pluggable components in this YAML file to experience more of KubeSphere's features. For details, see Enabling Pluggable Components.

  • Save cluster-configuration.yaml after editing. Then use the following command to point the ks-installer image in kubesphere-installer.yaml at your own registry:
sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: 192.168.28.150:8001/kubesphere-install/kubesphere/ks-installer:v3.1.0#" kubesphere-installer.yaml

The registry address in the command is 192.168.28.150:8001/kubesphere-install. Make sure you use your own registry address.
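
You can confirm the replacement took effect before applying the file:

grep "image:" kubesphere-installer.yaml

The line printed should point at your private registry, e.g. 192.168.28.150:8001/kubesphere-install/kubesphere/ks-installer:v3.1.0.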

Start the Installation

After confirming that all of the steps above are complete, run the following commands.

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

Check the installation logs:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
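
The installer can take a while in an offline environment. While it runs, you can keep an eye on the pods it creates from any node with kubectl access:

kubectl get pods -A | grep -v Running

A pod stuck in ImagePullBackOff usually means the corresponding image is missing from the private registry, or the registry address in cluster-configuration.yaml is wrong.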

Verify the Installation

When the installation finishes, you will see output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.28.140:30880
Account: admin
Password: P@88w0rd

NOTES:1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are ready.
  2. Please modify the default password after login.

#####################################################
https://kubesphere.io             2021-02-09 13:50:43
#####################################################

You can now access the KubeSphere web console at http://{IP}:30880 using the default account and password admin/P@88w0rd.
To access the console, make sure port 30880 is open in your security group (and in any host firewall; see the example below).
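
For example, if the nodes run firewalld (an assumption; skip this if no host firewall is in use), the port can be opened with:

firewall-cmd --permanent --add-port=30880/tcp
firewall-cmd --reload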

Appendix

KubeSphere v3.1.0 image list

##k8s-images
kubesphere/kube-apiserver:v1.20.4
kubesphere/kube-scheduler:v1.20.4
kubesphere/kube-proxy:v1.20.4
kubesphere/kube-controller-manager:v1.20.4
kubesphere/kube-apiserver:v1.19.8
kubesphere/kube-scheduler:v1.19.8
kubesphere/kube-proxy:v1.19.8
kubesphere/kube-controller-manager:v1.19.8
kubesphere/kube-apiserver:v1.18.6
kubesphere/kube-scheduler:v1.18.6
kubesphere/kube-proxy:v1.18.6
kubesphere/kube-controller-manager:v1.18.6
kubesphere/kube-apiserver:v1.17.9
kubesphere/kube-scheduler:v1.17.9
kubesphere/kube-proxy:v1.17.9
kubesphere/kube-controller-manager:v1.17.9
kubesphere/pause:3.1
kubesphere/pause:3.2
kubesphere/etcd:v3.4.13
calico/cni:v3.16.3
calico/kube-controllers:v3.16.3
calico/node:v3.16.3
calico/pod2daemon-flexvol:v3.16.3
coredns/coredns:1.6.9
kubesphere/k8s-dns-node-cache:1.15.12
openebs/provisioner-localpv:2.3.0
openebs/linux-utils:2.3.0
kubesphere/nfs-client-provisioner:v3.1.0-k8s1.11
##csi-images
csiplugin/csi-neonsan:v1.2.0
csiplugin/csi-neonsan-ubuntu:v1.2.0
csiplugin/csi-neonsan-centos:v1.2.0
csiplugin/csi-provisioner:v1.5.0
csiplugin/csi-attacher:v2.1.1
csiplugin/csi-resizer:v0.4.0
csiplugin/csi-snapshotter:v2.0.1
csiplugin/csi-node-driver-registrar:v1.2.0
csiplugin/csi-qingcloud:v1.2.0
##kubesphere-images
kubesphere/ks-apiserver:v3.1.0
kubesphere/ks-console:v3.1.0
kubesphere/ks-controller-manager:v3.1.0
kubesphere/ks-installer:v3.1.0
kubesphere/kubectl:v1.17.0
kubesphere/kubectl:v1.18.0
kubesphere/kubectl:v1.19.0
kubesphere/kubectl:v1.20.0
redis:5.0.5-alpine
alpine:3.10.4
haproxy:2.0.4
nginx:1.14-alpine
minio/minio:RELEASE.2019-08-07T01-59-21Z
minio/mc:RELEASE.2019-08-07T23-14-43Z
mirrorgooglecontainers/defaultbackend-amd64:1.4
kubesphere/nginx-ingress-controller:v0.35.0
osixia/openldap:1.3.0
csiplugin/snapshot-controller:v2.0.1
kubesphere/kubefed:v0.7.0
kubesphere/tower:v0.2.0
kubesphere/prometheus-config-reloader:v0.42.1
kubesphere/prometheus-operator:v0.42.1
prom/alertmanager:v0.21.0
prom/prometheus:v2.26.0
prom/node-exporter:v0.18.1
kubesphere/ks-alerting-migration:v3.1.0
jimmidyson/configmap-reload:v0.3.0
kubesphere/notification-manager-operator:v1.0.0
kubesphere/notification-manager:v1.0.0
kubesphere/metrics-server:v0.4.2
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/kube-state-metrics:v1.9.7
openebs/provisioner-localpv:2.3.0
thanosio/thanos:v0.18.0
grafana/grafana:7.4.3
##kubesphere-logging-images
kubesphere/elasticsearch-oss:6.7.0-1
kubesphere/elasticsearch-curator:v5.7.6
kubesphere/fluentbit-operator:v0.5.0
kubesphere/fluentbit-operator:migrator
kubesphere/fluent-bit:v1.6.9
elastic/filebeat:6.7.0
kubesphere/kube-auditing-operator:v0.1.2
kubesphere/kube-auditing-webhook:v0.1.2
kubesphere/kube-events-exporter:v0.1.0
kubesphere/kube-events-operator:v0.1.0
kubesphere/kube-events-ruler:v0.2.0
kubesphere/log-sidecar-injector:1.1
docker:19.03
##istio-images
istio/pilot:1.6.10
istio/proxyv2:1.6.10
jaegertracing/jaeger-agent:1.17
jaegertracing/jaeger-collector:1.17
jaegertracing/jaeger-es-index-cleaner:1.17
jaegertracing/jaeger-operator:1.17.1
jaegertracing/jaeger-query:1.17
kubesphere/kiali:v1.26.1
kubesphere/kiali-operator:v1.26.1
##kubesphere-devops-images
kubesphere/ks-jenkins:2.249.1
jenkins/jnlp-slave:3.27-1
kubesphere/s2ioperator:v3.1.0
kubesphere/s2irun:v2.1.1
kubesphere/builder-base:v2.1.0
kubesphere/builder-nodejs:v2.1.0
kubesphere/builder-maven:v2.1.0
kubesphere/builder-go:v2.1.0
kubesphere/s2i-binary:v2.1.0
kubesphere/tomcat85-java11-centos7:v2.1.0
kubesphere/tomcat85-java11-runtime:v2.1.0
kubesphere/tomcat85-java8-centos7:v2.1.0
kubesphere/tomcat85-java8-runtime:v2.1.0
kubesphere/java-11-centos7:v2.1.0
kubesphere/java-8-centos7:v2.1.0
kubesphere/java-8-runtime:v2.1.0
kubesphere/java-11-runtime:v2.1.0
kubesphere/nodejs-8-centos7:v2.1.0
kubesphere/nodejs-6-centos7:v2.1.0
kubesphere/nodejs-4-centos7:v2.1.0
kubesphere/python-36-centos7:v2.1.0
kubesphere/python-35-centos7:v2.1.0
kubesphere/python-34-centos7:v2.1.0
kubesphere/python-27-centos7:v2.1.0
kubesphere/notification:flyway_v2.1.2
kubesphere/notification:v2.1.2
##openpitrix-images
kubesphere/openpitrix-jobs:v3.1.0
##weave-scope-images
weaveworks/scope:1.13.0
##kubeedge-images
kubeedge/cloudcore:v1.6.1
kubesphere/edge-watcher:v0.1.0
kubesphere/kube-rbac-proxy:v0.5.0
kubesphere/edge-watcher-agent:v0.1.0
##example-images
kubesphere/examples-bookinfo-productpage-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v1:1.16.2
kubesphere/examples-bookinfo-reviews-v2:1.16.2
kubesphere/examples-bookinfo-reviews-v3:1.16.2
kubesphere/examples-bookinfo-details-v1:1.16.2
kubesphere/examples-bookinfo-ratings-v1:1.16.3
busybox:1.31.1
joosthofman/wget:1.0
kubesphere/netshoot:v1.0
nginxdemos/hello:plain-text
wordpress:4.8-apache
mirrorgooglecontainers/hpa-example:latest
java:openjdk-8-jre-alpine
fluent/fluentd:v1.4.2-2.0
perl:latest