I. Why k8s v1.16.0?

I tried the latest version, v1.16.2, first and could never get it installed: after running kubeadm init it threw a pile of errors such as "node xxx not found". I reinstalled CentOS 7 several times and still couldn't get past it; after a full day I nearly gave up. The install guides I found online were almost all for v1.16.0. I didn't really believe v1.16.2 itself was the problem, so I hadn't planned to drop back to v1.16.0, but with no other ideas left I tried installing v1.16.0 — and to my surprise it worked. Recording it here to save the next person from the same pitfalls.

The major installation steps in this article are:

  • Install docker-ce 18.09.9 (all machines)
  • Prepare the environment for k8s (all machines)
  • Install the k8s v1.16.0 master node
  • Install the k8s v1.16.0 worker node
  • Install flannel (master)

One important point: note the IPs your master and node use to talk to each other. My master's IP is 192.168.99.104 and my node's IP is 192.168.99.105. Make sure the two machines can ping each other on these IPs — the master IP 192.168.99.104 is needed later when configuring k8s.
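A quick way to check this connectivity from either machine, sketched below with the article's example addresses (substitute your own IPs):

```shell
# Connectivity sanity check between master and node.
# 192.168.99.104/105 are this article's example IPs -- replace with your own.
for ip in 192.168.99.104 192.168.99.105; do
  if ping -c 2 -W 2 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip NOT reachable"
  fi
done
```

Run it on both machines; every address should report "reachable" before you continue.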

My environment:

  • Host OS: win10
  • Hypervisor: VirtualBox
  • Linux distribution: CentOS7
  • Linux kernel (check with uname -r): 3.10.0-957.el7.x86_64
  • IP the master and node use to communicate (master): 192.168.99.104

II. Install docker-ce 18.09.9 (all machines)

Every machine that will run k8s needs docker. Commands:

# Install the tools docker needs
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Aliyun docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install this specific docker-ce version
yum install -y docker-ce-18.09.9-3.el7
# Start docker
systemctl enable docker && systemctl start docker

III. Prepare the environment for k8s (all machines)

The machines need at least 2 CPUs and 2 GB of RAM; that is easy to set in the VM configuration. Then run the script below to prepare the system. Every machine that will run k8s needs this step.
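Before running the preparation script, you can sanity-check the 2-CPU / 2 GB minimums with a short sketch like this (it only reads standard Linux interfaces; the 1800000 kB threshold is an approximation of 2 GB, chosen here since `/proc/meminfo` reports slightly less than the configured RAM):

```shell
# Check kubeadm's minimum resources: at least 2 CPUs and roughly 2 GB of RAM.
CPUS=$(nproc)
MEM_KB=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
echo "cpus=$CPUS mem_kb=$MEM_KB"
if [ "$CPUS" -ge 2 ] && [ "$MEM_KB" -ge 1800000 ]; then
  echo "resources OK"
else
  echo "below kubeadm minimums"
fi
```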

# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable selinux
# Temporarily:
setenforce 0
# Permanently: edit the config files
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable the swap partition
swapoff -a
# Permanently: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

# Set kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
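If you want to see what the two "permanent" `sed` edits do before touching the real system files, you can try them on throwaway copies first. The sample file contents below are invented for the demo; on a real machine the script above edits /etc/selinux/config and /etc/fstab directly:

```shell
# Demo of the selinux/swap edits against throwaway sample files.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux.demo
printf '/dev/mapper/centos-root / xfs defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > /tmp/fstab.demo

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux.demo
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo

grep 'SELINUX=' /tmp/selinux.demo   # now reads SELINUX=disabled
grep 'swap' /tmp/fstab.demo         # the swap line is now commented out with '#'
```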

IV. Install the k8s v1.16.0 master node

If docker is not installed yet, follow section II (Install docker-ce 18.09.9, all machines). If the environment has not been prepared, follow section III (Prepare the environment for k8s, all machines).

Once both of those are done, continue with the steps below.

  1. Install kubeadm, kubelet and kubectl

The official k8s repository is hosted by Google and unreachable from mainland China, so we use the Aliyun yum mirror:

# Configure the Aliyun k8s repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubectl and kubelet
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0
# Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
  2. Initialize k8s. The command below pulls the docker images k8s needs. Because the usual registries are unreachable from mainland China, it uses the Aliyun mirror (registry.aliyuncs.com/google_containers). Another very important point: --apiserver-advertise-address must be the IP over which the master and node can ping each other — mine is 192.168.99.104. I lost an entire evening to this at first; change the IP to your own before running. The command hangs at [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' for about two minutes, so please be patient.
# Pull the 6 docker images the master node uses (view them later with docker images).
# This takes roughly two minutes, hanging at
# [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address 192.168.99.104 --pod-network-cidr=10.244.0.0/16 --token-ttl 0

When the command above finishes, it prints a few commands for you to run; copy, paste and execute them.

# k8s prints these commands at the end of the install above; run them
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  3. Save the node join command. When kubeadm init succeeds, it prints the command for nodes to join the cluster. You will run it on the node shortly, so save it. If you lose it, you can regenerate it with:
kubeadm token create --print-join-command

That completes the master node installation. You can check with kubectl get nodes; the master will show as NotReady at this point — don't worry about it for now.

V. Install the k8s v1.16.0 worker node

If docker is not installed yet, follow section II (Install docker-ce 18.09.9, all machines). If the environment has not been prepared, follow section III (Prepare the environment for k8s, all machines).

Once both of those are done, continue with the steps below.

  1. Install kubeadm and kubelet
# Configure the Aliyun k8s repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm and kubelet
yum install -y  kubeadm-1.16.0-0 kubelet-1.16.0-0
# Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
  2. Join the cluster. The join command is different for everyone; you can log in to the master and get it with kubeadm token create --print-join-command. Then run it, for example:
# Join the cluster. If you don't have the join command, get it on the master with:
# kubeadm token create --print-join-command
kubeadm join 192.168.99.104:6443 --token ncfrid.7ap0xiseuf97gikl --discovery-token-ca-cert-hash sha256:47783e9851a1a517647f1986225f104e81dbfd8fb256ae55ef6d68ce9334c6a2
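If you are curious where the --discovery-token-ca-cert-hash value comes from: it is the SHA-256 of the cluster CA's public key. The sketch below derives one from a throwaway self-signed certificate generated on the spot; on a real master you would point the pipeline at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA cert to stand in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
# Hash the certificate's public key, giving the value used after "sha256:"
HASH=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```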

Once the join succeeds, you can see the new node on the master with kubectl get nodes.

VI. Install flannel (master machine)

After the steps above, the cluster is assembled but the nodes are still in NotReady state, as shown in the figure; the master machine needs flanneld installed.

  1. Download the official flannel manifest. Use wget; the URL is (https://raw.githubusercontent..., which is unreachable from mainland China, so I copied its content. To keep this article from getting too long, it is pasted in the appendix (section VIII) at the end. The yml also references an address unreachable from China (quay.io), which I have changed to a reachable mirror (quay-mirror.qiniu.com). Create a new kube-flannel.yml file and paste that content in.
  2. Install flannel
kubectl apply -f kube-flannel.yml

VII. Done

With that, the k8s cluster is fully set up; as the figure shows, the nodes are now in Ready state. All done!

VIII. Appendix

This is the content of kube-flannel.yml, with the unreachable address (quay.io) replaced throughout by a mirror reachable from China (quay-mirror.qiniu.com). Create a kube-flannel.yml file and paste in this content.

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Source: https://www.jianshu.com/p/25c...