Welcome to my GitHub

https://github.com/zq2599/blog_demos

Contents: a categorized index of all my original articles, with companion source code, covering Java, Docker, Kubernetes, DevOps and more;

About kubespray

Kubespray is an open-source Kubernetes deployment tool. It is built around Ansible, which makes it easy to deploy a highly available cluster. Official repository: https://github.com/kubernetes...

Important prerequisite

This walkthrough uses the online installation recommended by the official docs, which pulls images from Google's registry, so <font color="red">your network must be able to reach Google services</font>;

Machine information

  • Being on a budget, I scraped together a total of two machines for this walkthrough; their hostnames, IP addresses and roles are listed below:

| Hostname | IP address | Role | OS |
| --- | --- | --- | --- |
| ansible | 192.168.50.134 | ansible host | CentOS7 |
| node1 | 192.168.50.27 | k8s server | ubuntu-20.04.1 |

  • As you can see, kubernetes is deployed on the <font color="blue">ubuntu machine</font>;

    Standardized setup

    Make the following settings on the ubuntu machine:

  • Edit /etc/hostname and set the hostname
  • Edit /etc/hosts and add your own hostname and IP address
  • Disable the firewall
ufw disable
  1. Check again; the firewall should now be inactive
root@ideapad:~# ufw status
Status: inactive
  1. Disable SELinux; if the next command prompts you to install <font color="blue">selinux-utils</font>, SELinux is not installed and you can skip this step
setenforce 0
  1. IPv4 network settings
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl -w net.ipv4.ip_forward=1
  1. Disable swap immediately:
swapoff -a
  1. On my machine, memory usage before disabling swap:
root@ideapad:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          31913         551       30288         137        1073       30839
Swap:          2047           0        2047
  1. After running <font color="blue">swapoff -a</font>, check again; swap is now all zeros:
root@ideapad:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          31913         557       30281         137        1073       30833
Swap:             0           0           0
  1. The method above takes effect immediately, but after a reboot the swap partition is back in use; to disable it permanently, open <font color="blue">/etc/fstab</font> and add a <font color="red">#</font> at the beginning of the swap line (the one in the red box below)
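The swapoff, setenforce and sysctl commands above all lose effect on reboot. Commenting the swap entry out of /etc/fstab can also be done non-interactively; the sketch below (the paths and sample fstab content are invented for illustration) performs the edit on a throwaway copy so it is safe to run anywhere, but on a real host you would point the same sed at /etc/fstab:

```shell
# Build a throwaway demo copy of an fstab (contents are made up for illustration)
demo=/tmp/fstab.demo
printf '%s\n' \
  'UUID=1234-abcd /     ext4  defaults  0 1' \
  '/swapfile      none  swap  sw        0 0' > "$demo"

# Prefix every non-comment line that has a standalone "swap" field with '#',
# keeping a .bak backup -- the same change the article makes by hand
sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$demo"

cat "$demo"
```

The br_netfilter module and the two sysctls can likewise be persisted through drop-in files under /etc/modules-load.d/ and /etc/sysctl.d/, which is worth doing before any reboot test.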

    Passwordless SSH login from the ansible host

  1. SSH into the ansible host;
  2. Generate an SSH key pair: type <font color="blue">ssh-keygen</font> and press Enter four times:
  3. Run <font color="blue">ssh-copy-id root@192.168.50.27</font> to distribute the ansible host's public key to the ubuntu host; you will be asked to type yes and the ubuntu host's root password, and once that is done, the ansible host can SSH into the ubuntu host without a password:
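As a side note, the four Enter presses simply accept ssh-keygen's defaults, so the key pair can also be generated non-interactively. A small sketch (the /tmp/demo_key path is made up so the snippet can run anywhere; on the ansible host you would keep the default ~/.ssh/id_rsa and then run the ssh-copy-id command above):

```shell
# Generate an RSA key pair with an empty passphrase and no prompts
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_key -q

# Both halves of the key pair now exist
ls -l /tmp/demo_key /tmp/demo_key.pub
```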

    Operations on the ansible host

  1. SSH into the ansible host;
  2. Install ansible:
yum install -y epel-release ansible
  1. Install pip:
easy_install pip
  1. Upgrade jinja2 via pip:
pip2 install jinja2 --upgrade
  1. Install python36:
yum install python36 -y
  1. Create a working directory and cd into it:
mkdir /usr/local/kubespray && cd /usr/local/kubespray/
  1. Download kubespray; here I use version <font color="blue">v2.14.2</font>:
wget https://github.com/kubernetes-sigs/kubespray/archive/v2.14.2.tar.gz
  1. Unpack it:
tar -zxvf v2.14.2.tar.gz
  1. Enter the unpacked directory:
cd kubespray-2.14.2/
  1. Install the packages kubespray needs (note that this is <font color="red">pip3</font>):
pip3 install -r requirements.txt
  1. Copy the sample configuration into the directory <font color="blue">inventory/mycluster</font>:
cp -rfp inventory/sample inventory/mycluster
  1. Take a look inside; many files have been copied into the mycluster directory:
[root@kubespray kubespray-2.14.2]# tree inventory/
inventory/
├── local
│   ├── group_vars -> ../sample/group_vars
│   └── hosts.ini
├── mycluster
│   ├── group_vars
│   │   ├── all
│   │   │   ├── all.yml
│   │   │   ├── aws.yml
│   │   │   ├── azure.yml
│   │   │   ├── containerd.yml
│   │   │   ├── coreos.yml
│   │   │   ├── docker.yml
│   │   │   ├── gcp.yml
│   │   │   ├── oci.yml
│   │   │   ├── openstack.yml
│   │   │   └── vsphere.yml
│   │   ├── etcd.yml
│   │   └── k8s-cluster
│   │       ├── addons.yml
│   │       ├── k8s-cluster.yml
│   │       ├── k8s-net-calico.yml
│   │       ├── k8s-net-canal.yml
│   │       ├── k8s-net-cilium.yml
│   │       ├── k8s-net-contiv.yml
│   │       ├── k8s-net-flannel.yml
│   │       ├── k8s-net-kube-router.yml
│   │       ├── k8s-net-macvlan.yml
│   │       └── k8s-net-weave.yml
│   └── inventory.ini
└── sample
    ├── group_vars
    │   ├── all
    │   │   ├── all.yml
    │   │   ├── aws.yml
    │   │   ├── azure.yml
    │   │   ├── containerd.yml
    │   │   ├── coreos.yml
    │   │   ├── docker.yml
    │   │   ├── gcp.yml
    │   │   ├── oci.yml
    │   │   ├── openstack.yml
    │   │   └── vsphere.yml
    │   ├── etcd.yml
    │   └── k8s-cluster
    │       ├── addons.yml
    │       ├── k8s-cluster.yml
    │       ├── k8s-net-calico.yml
    │       ├── k8s-net-canal.yml
    │       ├── k8s-net-cilium.yml
    │       ├── k8s-net-contiv.yml
    │       ├── k8s-net-flannel.yml
    │       ├── k8s-net-kube-router.yml
    │       ├── k8s-net-macvlan.yml
    │       └── k8s-net-weave.yml
    └── inventory.ini

10 directories, 45 files
  1. Set the cluster node IPs (still in the kubespray-2.14.2 directory):
declare -a IPS=(192.168.50.27)
  1. Generate the ansible inventory:
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
  1. kubespray's script has now planned the cluster from the IPs you supplied; the details are in <font color="blue">inventory/mycluster/hosts.yml</font>, shown below, and you can edit this file yourself:
[root@kubespray kubespray-2.14.2]# cat inventory/mycluster/hosts.yml
all:
  hosts:
    node1:
      ansible_host: 192.168.50.27
      ip: 192.168.50.27
      access_ip: 192.168.50.27
  children:
    kube-master:
      hosts:
        node1:
    kube-node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
  1. Run the following command to start the installation; the online install takes a while, so be patient:
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
  1. Failures caused by network problems are common; when that happens, simply re-run the command above, since ansible skips steps that have already completed;
  2. When the installation finishes, the console prints something like the following (heavily abridged):
Saturday 21 November 2020  17:47:18 +0800 (0:00:00.025)       0:30:03.154 *****
Saturday 21 November 2020  17:47:18 +0800 (0:00:00.024)       0:30:03.179 *****

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=591  changed=95   unreachable=0    failed=0    skipped=1131 rescued=0    ignored=0

Saturday 21 November 2020  17:47:18 +0800 (0:00:00.021)       0:30:03.200 *****
===============================================================================
download : download_file | Download item ----------------------------- 1008.61s
kubernetes/preinstall : Update package management cache (APT) --------- 119.25s
download : download_container | Download image if required ------------- 42.36s
download : download_container | Download image if required ------------- 38.26s
download : download_container | Download image if required ------------- 37.31s
download : download_container | Download image if required ------------- 36.60s
download : download_container | Download image if required ------------- 35.01s
download : download_container | Download image if required ------------- 34.00s
download : download_container | Download image if required ------------- 30.55s
download : download_container | Download image if required ------------- 27.47s
download : download_container | Download image if required ------------- 26.78s
kubernetes/master : kubeadm | Initialize first master ------------------ 25.98s
download : download_container | Download image if required ------------- 23.42s
download : download_container | Download image if required ------------- 22.14s
download : download_container | Download image if required ------------- 21.50s
download : download_container | Download image if required ------------- 20.17s
download : download_container | Download image if required ------------- 17.55s
container-engine/docker : ensure docker packages are installed ---------- 9.73s
kubernetes/master : Master | wait for kube-scheduler -------------------- 7.83s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template --- 6.93s
  1. At this point the Kubernetes cluster is deployed; next, a quick check that the environment is usable;
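One more remark on the inventory_builder step above: it scales to bigger clusters. Pass several IPs and the script spreads the kube-master, kube-node and etcd groups across the machines in hosts.yml. A sketch with hypothetical addresses (only 192.168.50.27 actually exists in this article, so the generator command itself is left commented out):

```shell
# Hypothetical three-node cluster (.28 and .29 are invented for illustration)
declare -a IPS=(192.168.50.27 192.168.50.28 192.168.50.29)
echo "declared ${#IPS[@]} node IPs: ${IPS[*]}"

# Then, exactly as in the single-node case, from inside kubespray-2.14.2:
# CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```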

    Checking the environment

  1. SSH into the ubuntu machine;
  2. Check the nodes, services and pods:
root@node1:~# kubectl get node -o wide
NAME    STATUS   ROLES    AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node1   Ready    master   104m   v1.18.10   192.168.50.27   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   docker://19.3.12
root@node1:~# kubectl get services --all-namespaces
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.233.0.1      <none>        443/TCP                  105m
kube-system   coredns                     ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   104m
kube-system   dashboard-metrics-scraper   ClusterIP   10.233.12.230   <none>        8000/TCP                 104m
kube-system   kubernetes-dashboard        ClusterIP   10.233.61.24    <none>        443/TCP                  104m
root@node1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6ccb68f9b5-kwqck      1/1     Running   0          104m
kube-system   calico-node-4lmpf                             1/1     Running   0          104m
kube-system   coredns-dff8fc7d-2gnl8                        1/1     Running   0          104m
kube-system   coredns-dff8fc7d-4vthn                        0/1     Pending   0          104m
kube-system   dns-autoscaler-66498f5c5f-qh4vb               1/1     Running   0          104m
kube-system   kube-apiserver-node1                          1/1     Running   0          105m
kube-system   kube-controller-manager-node1                 1/1     Running   0          105m
kube-system   kube-proxy-kk84b                              1/1     Running   0          105m
kube-system   kube-scheduler-node1                          1/1     Running   0          105m
kube-system   kubernetes-dashboard-667c4c65f8-8ckf5         1/1     Running   0          104m
kube-system   kubernetes-metrics-scraper-54fbb4d595-dk42t   1/1     Running   0          104m
kube-system   nodelocaldns-d69h9                            1/1     Running   0          104m
  • The essential pods and services are all up and running; next, let's see whether the dashboard is reachable;

    Accessing the dashboard

    The dashboard shows the overall state of the Kubernetes system; to reach its page, we first need to add some RBAC objects:

  • SSH into the ubuntu machine;
  • Run the following to create the file <font color="blue">admin-user.yaml</font>:
tee admin-user.yaml <<-'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
  1. Run the following to create the file <font color="blue">admin-user-role.yaml</font>:
tee admin-user-role.yaml <<-'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
  1. Create the ServiceAccount and ClusterRoleBinding:
kubectl create -f admin-user.yaml && kubectl create -f admin-user-role.yaml
  1. Change the type of the <font color="blue">kubernetes-dashboard</font> service from ClusterIP to NodePort so that we can reach the dashboard from a browser:
kubectl patch svc kubernetes-dashboard -n kube-system \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
  1. Look at the service again; it has been switched to <font color="blue">NodePort</font>:
root@node1:~# kubectl get service --all-namespaces
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.233.0.1      <none>        443/TCP                  132m
kube-system   coredns                     ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   131m
kube-system   dashboard-metrics-scraper   ClusterIP   10.233.12.230   <none>        8000/TCP                 131m
kube-system   kubernetes-dashboard        NodePort    10.233.61.24    <none>        443:30443/TCP            131m
  1. Get the token used to log in to the dashboard page:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
  1. The red box in the screenshot below contains the token:
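The grep/awk pipeline in the token command above does nothing magical: it keeps the line whose secret name starts with admin-user and takes the first column. Here is the same extraction run against canned sample output (the secret names are invented for illustration):

```shell
# Canned sample of `kubectl -n kube-system get secret` output (names are made up)
sample='NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      5m
default-token-xyz99      kubernetes.io/service-account-token   3      10m'

# Same extraction as in the article: keep the admin-user line, print column 1
name=$(printf '%s\n' "$sample" | grep admin-user | awk '{print $1}')
echo "secret name: $name"
```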

  1. Now open the dashboard in a browser at https://192.168.50.27:30443 , where <font color="blue">192.168.50.27</font> is the ubuntu machine's IP address;
  2. Because the dashboard's certificate is self-signed, the browser may show a security warning, as below; choose <font color="blue">Proceed</font>:

  1. The page now asks how you want to sign in; choose <font color="blue">Token</font>, enter the token obtained earlier, and log in:

  1. After logging in you can see the system overview, as shown below:


That completes the installation of kubernetes-1.18.10 with kubespray-2.14.2; I hope this article gives you a useful reference.

You are not alone; xinchen's original series keeps you company

  1. Java series
  2. Spring series
  3. Docker series
  4. kubernetes series
  5. Database + middleware series
  6. DevOps series

Follow my WeChat official account: 程序员欣宸

Search WeChat for 「程序员欣宸」; I'm Xinchen, looking forward to exploring the Java world with you...
https://github.com/zq2599/blog_demos