March 6, 2019: Publishing my tutorial on deploying a k8s cluster with kubeadm

This installation uses kubeadm! The k8s version installed is 1.13.3, the latest release at the time of writing! All of the images and YAML files used in this article can be found on my GitHub!
GitHub: https://github.com/heyangguang
If you have any questions, contact me directly by email: heyangev@cn.ibm.com

Host list (every host runs CentOS 7.5):

Hostname    IP address       Role
k8smaster   9.186.137.114    master
k8snode-1   9.186.137.115    node
k8snode-2   9.186.137.116    node

Base environment preparation:

Disable the firewall, SELinux, swap, and NetworkManager on every host. (In /etc/selinux/config set SELINUX=disabled, and in /etc/fstab comment out the swap line, as the cat output below shows.)

k8smaster:
[root@k8smaster ~]# systemctl stop firewalld
[root@k8smaster ~]# systemctl disable firewalld
[root@k8smaster ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8smaster ~]# vim /etc/selinux/config
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode-1:/etc/selinux/config
config 100% 546 1.1MB/s 00:00
[root@k8smaster ~]# scp /etc/selinux/config root@k8snode-2:/etc/selinux/config
config 100% 546 1.3MB/s 00:00
[root@k8smaster ~]# swapoff -a
[root@k8smaster ~]# vim /etc/fstab
[root@k8smaster ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Mar 4 17:23:04 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=3dd5660e-0905-4f1e-9fa3-9ce664d6eb94 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0

k8snode-1:
[root@k8snode-1 ~]# systemctl stop firewalld
[root@k8snode-1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8snode-1 ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8snode-1 ~]# swapoff -a
[root@k8snode-1 ~]# vim /etc/fstab

k8snode-2:
[root@k8snode-2 ~]# systemctl stop firewalld
[root@k8snode-2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8snode-2 ~]# swapoff -a
[root@k8snode-2 ~]# systemctl stop NetworkManager ; systemctl disable NetworkManager
[root@k8snode-2 ~]# vim /etc/fstab

yum repo configuration:

Clear the repo directories on k8snode-1 and k8snode-2 first, then copy the yum repo files over from the master. On the master, clear out the stock repos and put the new CentOS-Base.repo, epel.repo, and k8s.repo files in place (preparing those three files is not shown here):

k8smaster:
[root@k8smaster yum.repos.d]# rm -rf *
[root@k8smaster yum.repos.d]# ll
total 12
-rw-r--r-- 1 root root 2206 Mar 5 18:50 CentOS-Base.repo
-rw-r--r-- 1 root root  923 Mar 5 18:50 epel.repo
-rw-r--r-- 1 root root  276 Mar 5 18:50 k8s.repo
[root@k8smaster yum.repos.d]# scp * k8snode-1:/etc/yum.
scp: /etc/yum.: No such file or directory
[root@k8smaster yum.repos.d]# scp * k8snode-1:/etc/yum.repos.d/
CentOS-Base.repo 100% 2206 352.0KB/s 00:00
epel.repo 100% 923 160.8KB/s 00:00
k8s.repo 100% 276 48.2KB/s 00:00
[root@k8smaster yum.repos.d]# scp * k8snode-2:/etc/yum.repos.d/
CentOS-Base.repo 100% 2206 216.3KB/s 00:00
epel.repo 100% 923 157.1KB/s 00:00
k8s.repo 100% 276 47.5KB/s 00:00

k8snode-1:
[root@k8snode-1 ~]# cd /etc/yum.repos.d/
[root@k8snode-1 yum.repos.d]# rm -rf *

k8snode-2:
[root@k8snode-2 ~]# rm -rf /etc/yum.repos.d/*

Install the Docker engine:

k8smaster:
[root@k8smaster yum.repos.d]# yum -y install docker
[root@k8smaster yum.repos.d]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

k8snode-1:
[root@k8snode-1 yum.repos.d]# yum -y install docker
[root@k8snode-1 yum.repos.d]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

k8snode-2:
[root@k8snode-2 ~]# yum -y install docker
[root@k8snode-2 ~]# systemctl start docker ; systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
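Before moving on, it is worth a quick sanity check that Docker is active and noting which cgroup driver it uses, since the kubelet's cgroup driver has to match or the kubelet will refuse to start. This check is my own addition, not part of the original steps:

# Confirm Docker is running and see which cgroup driver it uses
# (the kubelet must be configured with the same driver).
systemctl is-active docker
docker info 2>/dev/null | grep -i 'cgroup driver'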
Set the k8s-related kernel parameters:

k8smaster:
[root@k8smaster ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8smaster ~]# sysctl -p

k8snode-1:
[root@k8snode-1 ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8snode-1 ~]# sysctl -p

k8snode-2:
[root@k8snode-2 ~]# cat << EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
[root@k8snode-2 ~]# sysctl -p
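One caveat I would add here: sysctl -p on its own only reads /etc/sysctl.conf, so the file we just dropped into /etc/sysctl.d/ is not necessarily applied, and the net.bridge.* keys only exist once the br_netfilter module is loaded. A safer sequence, offered as a sketch:

# Load the bridge netfilter module so the net.bridge.* keys exist.
modprobe br_netfilter
# Apply every file under /etc/sysctl.d/, including our k8s.conf.
sysctl --system
# Both keys should now report 1.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables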
Install the Kubernetes components:

Note: this pins kubeadm to 1.11.1 while kubelet and kubectl are 1.13.3, which is why kubeadm init warns about a version skew below. If your mirror carries kubeadm-1.13.3, install that instead so the versions match.

k8smaster:
[root@k8smaster ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8smaster ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

k8snode-1:
[root@k8snode-1 ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8snode-1 ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

k8snode-2:
[root@k8snode-2 ~]# yum install -y kubelet-1.13.3 kubeadm-1.11.1 kubectl-1.13.3 --disableexcludes=kubernetes
[root@k8snode-2 ~]# systemctl start kubelet ; systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

Import the required images:

Why import images? Because kubeadm pulls them from Google's registry (k8s.gcr.io), which, as we all know, is not reachable from everywhere! So I downloaded them in advance; when you need them, just import them directly.

Images required:
k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/pause:3.1

All of the images used here are packaged on my GitHub; download them if you need them. The import procedure is identical on every node, just load them all!

k8smaster:
[root@k8smaster ~]# ls
anaconda-ks.cfg images.tar
[root@k8smaster ~]# ll
total 1834184
-rw-------. 1 root root       1245 Mar 4 17:27 anaconda-ks.cfg
-rw-r--r--. 1 root root 1878197248 Mar 5 18:31 images.tar
[root@k8smaster ~]# docker load < images.tar

k8snode-1:
[root@k8snode-1 ~]# ls
anaconda-ks.cfg images.tar
[root@k8snode-1 ~]# docker load < images.tar

k8snode-2:
[root@k8snode-2 ~]# ls
anaconda-ks.cfg images.tar
[root@k8snode-2 ~]# docker load < images.tar

Initialize kubeadm on the master to generate the node join token:

Use kubeadm init to initialize the environment. --kubernetes-version pins the version and --pod-network-cidr sets the pod network CIDR. In principle any private CIDR works, but it must match your network add-on; the stock kube-flannel.yml used later expects 10.244.0.0/16, so keep them consistent.

[root@k8smaster ~]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.13.3
[preflight] running pre-flight checks
	[WARNING KubernetesVersion]: kubernetes version is greater than kubeadm version. Please consider to upgrade kubeadm. kubernetes version: 1.13.3. Kubeadm version: 1.11.x
I0305 19:49:27.250624 5373 kernel_validator.go:81] Validating kernel version
I0305 19:49:27.250718 5373 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.186.137.114]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [9.186.137.114 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 19.502118 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8smaster as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8smaster as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8smaster" as an annotation
[bootstraptoken] using token: xjzf96.nv0qhqwj9j47r1tv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c

[root@k8smaster ~]# mkdir -p $HOME/.kube
[root@k8smaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8smaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
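A side note: the bootstrap token printed above expires (after 24 hours by default). If you join a node later and the token has lapsed, kubeadm can mint a fresh join command on the master. A quick sketch using standard kubeadm subcommands, though flag availability depends on your kubeadm version:

# List current bootstrap tokens and their TTLs.
kubeadm token list
# Create a new token and print the complete join command, CA cert hash included.
kubeadm token create --print-join-command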
Run kubeadm join on each node to join the cluster:

Note! The clocks on all hosts must be synchronized, or the join will fail with this error:

[discovery] Failed to request cluster info, will try again: [Get https://9.186.137.114:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
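To head that error off, make sure every node agrees on the time before joining. One way, as a sketch, assuming the ntpdate package is available from your repos and the public pool.ntp.org servers are reachable:

# One-shot clock sync; run on every node.
yum -y install ntpdate
ntpdate pool.ntp.org
# For ongoing sync, chronyd ships with CentOS 7:
systemctl start chronyd ; systemctl enable chronyd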
k8snode-1:
[root@k8snode-1 ~]# kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0305 20:02:24.453910 5983 kernel_validator.go:81] Validating kernel version
I0305 20:02:24.454026 5983 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "9.186.137.114:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://9.186.137.114:6443"
[discovery] Requesting info from "https://9.186.137.114:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "9.186.137.114:6443"
[discovery] Successfully established connection with API Server "9.186.137.114:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8snode-1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

k8snode-2:
[root@k8snode-2 ~]# kubeadm join 9.186.137.114:6443 --token xjzf96.nv0qhqwj9j47r1tv --discovery-token-ca-cert-hash sha256:e386175a5cae597dec6bfeb7c92d01bc5fe052313b50dc48e419057c8c3f824c
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
I0305 19:51:39.452856 5036 kernel_validator.go:81] Validating kernel version
I0305 19:51:39.452954 5036 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "9.186.137.114:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://9.186.137.114:6443"
[discovery] Requesting info from "https://9.186.137.114:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "9.186.137.114:6443"
[discovery] Successfully established connection with API Server "9.186.137.114:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8snode-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
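Both joins print a RequiredIPVSKernelModulesAvailable warning. It is harmless here, since kube-proxy simply stays in iptables mode, but if you want the IPVS proxier later, the modules named in the warning can be loaded up front. A sketch:

# Load the kernel modules the preflight check listed.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe "$mod"
done
# Verify they are loaded.
lsmod | egrep 'ip_vs|nf_conntrack_ipv4'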
Install the Flannel network:

k8smaster:
[root@k8smaster ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the K8S cluster status:

k8smaster:
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-kmfct            1/1     Running   0          8m26s
coredns-86c58d9df4-qn2k2            1/1     Running   0          8m26s
etcd-k8smaster                      1/1     Running   0          8m35s
kube-apiserver-k8smaster            1/1     Running   1          8m10s
kube-controller-manager-k8smaster   1/1     Running   0          7m43s
kube-flannel-ds-amd64-9rmfz         1/1     Running   0          5m9s
kube-flannel-ds-amd64-vnwtf         1/1     Running   0          12s
kube-flannel-ds-amd64-x7q4s         1/1     Running   0          51s
kube-proxy-7zl9n                    1/1     Running   0          7m31s
kube-proxy-t2sx9                    1/1     Running   0          8m27s
kube-proxy-txsfr                    1/1     Running   0          7m27s
kube-scheduler-k8smaster            1/1     Running   0          8m56s

And that's it: the k8s cluster is fully deployed with kubeadm! I hope you'll point out any problems you find, so we can improve together. Thanks, everyone!
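P.S. As one extra check of my own, not in the transcript above, confirm that all three nodes report Ready once the flannel pods are running:

# All three nodes should reach STATUS=Ready within a minute or two
# of the kube-flannel pods entering Running.
kubectl get nodes -o wide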