Preface
Kubernetes has become central to my day-to-day work, and I want a deeper understanding of how it works: partly to learn the design ideas of a well-built system, and partly because the knowledge pays off daily, for example when talking to our k8s platform developers. So, step one: set up a k8s cluster by hand.
Versions used in this post: kubectl / kubelet / kubeadm 1.23.1; OS: CentOS 8.2 64-bit.
Preparation
1. Buy a cloud server
The official recommendation is at least 2 cores and 4 GB of RAM; any domestic cloud vendor will do as the host for the k8s cluster. This tutorial uses CentOS 8.2 64-bit.
Note: the official docs list 2 GB as the minimum memory, but the machine gets sluggish once dashboard, ingress, and other services are installed, so 4 GB is recommended for a smooth experience.
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/e513dc1d52314ef9a89fa15791ef2b38~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
2. Open ports
Open port 30000 to the public internet; we will use it later to reach the k8s dashboard in a browser. Also confirm that the SSH port 22 is open.
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/bb09fa2abfbf4e949a1ebea84c25ee2b~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
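Opening the port is normally done in the cloud console's security group (as in the screenshot above); if the OS image also ships with firewalld enabled, which is the CentOS 8 default, the port has to be opened locally as well. A sketch, assuming firewalld is running:

```shell
# Open the dashboard NodePort (and 6443 for kube-apiserver) in firewalld
firewall-cmd --permanent --add-port=30000/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --reload
```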
Log in to the server over SSH and start configuring.
3. Install tools
Install common utilities:
yum install -y yum-utils device-mapper-persistent-data lvm2 iproute-tc
4. Add the Alibaba mirror
The GFW slows or blocks many sources inside mainland China, so add the Alibaba mirror for speed:
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Installation
1. Install Docker CE
Install:
yum -y install docker-ce
Enable it at boot:
systemctl enable docker
Check the Docker version with `docker version`:
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/8ddf996f4f124168a07d424a4fb80cd3~tplv-k3u1fbpfcp-zoom-1.image" style="width:60%">
</p>
2. Install kubectl, kubelet, and kubeadm
2.1 Add the Alibaba Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Note: Kubernetes dropped Docker support (dockershim) in v1.24, which introduces some installation problems, so we pin version 1.23.1 here.
2.2 Install kubectl, kubelet, and kubeadm at version 1.23.1:
yum install -y kubectl-1.23.1 kubelet-1.23.1 kubeadm-1.23.1
Enable kubelet at boot:
systemctl enable kubelet
Check the kubectl version:
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/5b9d4ca77a4047aea320cbd0989fa063~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
2.3 Change the cgroup driver
Run the following:
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Restart the services:
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
2.4 Switch the image registry
Since we are on a domestic cloud provider, image pulls from k8s.gcr.io are blocked by the GFW, so we swap in registry.cn-hangzhou.aliyuncs.com/google_containers as follows.
Delete the old config file:
rm -f /etc/containerd/config.toml
Generate the default config file:
containerd config default > /etc/containerd/config.toml
Replace the image address:
sed -i 's/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/' /etc/containerd/config.toml
Restart containerd:
systemctl restart containerd
2.5 Initialize the k8s master node
Initialization command:
kubeadm init --kubernetes-version=1.23.1 \
    --apiserver-advertise-address=<your server's private IP> \
    --image-repository registry.aliyuncs.com/google_containers \
    --service-cidr=10.10.0.0/16 \
    --pod-network-cidr=10.122.0.0/16
This step tends to hang for a while. With the exact versions used in this post it should, in theory, complete without errors; if you do hit errors, you will have to search them out and fix them one by one.
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/2b8afe070bda4bf5ba0a5885bec57504~tplv-k3u1fbpfcp-zoom-1.image" style="width:60%">
</p>
If initialization fails, run the following command and then re-initialize:
kubeadm reset -f
On success you get a command like the following, used later to join new worker nodes (not used this time):
kubeadm join <your server's private IP>:6443 --token 78376v.rznvls130w3sgwb7 \
    --discovery-token-ca-cert-hash sha256:add03fb7de52ad73fd96626fa9d9f0d639186524ba34d24742c15fce8093b8c5
Configure kubectl:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check that the k8s services are up:
kubectl get pod --all-namespaces
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/16174ee7fa614ef58060c24fc701e5e6~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
3. Install the Calico network
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
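One caveat: custom-resources.yaml only creates an Installation resource for the Tigera operator, so the operator manifest has to be applied first, and Calico's default pod CIDR (192.168.0.0/16) differs from the --pod-network-cidr we passed to kubeadm. A sketch of the full sequence, with the CIDR aligned to this post's 10.122.0.0/16 (verify the file contents against the Calico v3.25.1 docs before applying):

```shell
# Install the Tigera operator (CRDs + controller) before the custom resources
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml

# Align Calico's default pod CIDR with kubeadm's --pod-network-cidr
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
sed -i 's#192.168.0.0/16#10.122.0.0/16#' custom-resources.yaml
kubectl create -f custom-resources.yaml
```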
After installation, check that the Calico pods are running:
kubectl get pod --all-namespaces
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/1d0549f21a3c44c1a19f4491e7f8e453~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
4. Install kubernetes-dashboard
4.1 Download the config file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
4.2 Add a NodePort
Configure a NodePort so the dashboard is reachable from outside:
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/6ff847d0ac0f43cb9a0972d88fb4281b~tplv-k3u1fbpfcp-zoom-1.image" style="width:50%">
</p>
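The screenshot shows the edit made to the kubernetes-dashboard Service inside recommended.yaml. Roughly, the change amounts to the following (a sketch of just the Service section; the fixed nodePort 30000 matches the port opened earlier):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added: expose the Service on every node
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added: pin the external port to 30000
  selector:
    k8s-app: kubernetes-dashboard
```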
4.3 Create the dashboard service
Create it:
kubectl apply -f recommended.yaml
Check that kubernetes-dashboard is up:
kubectl get pod -n kubernetes-dashboard
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/dad68e94f9844a1d83143fa596a84a76~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
4.4 Access the dashboard externally
Open the dashboard in a browser at: https://<your public IP>:30000
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/ac84d1fea51e41cab64f4722c161472d~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
As shown above, because of the self-signed HTTPS certificate the browser warns that "your connection is not private". Chrome is recommended: click anywhere on the page, type thisisunsafe, and press Enter to proceed.
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/84d33d34138f4abf9cd3545ebcff4945~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
4.5 Get a token
Create a user. The dashboard-adminuser.yaml below follows the official sample-user guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Create the config file:
cat <<EOF > dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Create the user:
kubectl apply -f dashboard-adminuser.yaml
On success it prints:
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
Get the token with:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/e989e8eaa1ba4106ad3676e5669358fe~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
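A side note: the secret-based lookup above works because this is a v1.23 cluster. From v1.24 on, ServiceAccounts no longer get a long-lived token secret automatically, and you would request a token instead:

```shell
# v1.24+: issue a short-lived token for the admin-user ServiceAccount
kubectl -n kubernetes-dashboard create token admin-user
```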
4.6 Copy the token and log in to the dashboard
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/e82411aa044f44919ac9b29d5ba68fe0~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
At this point we can create pods, but they are not reachable from the outside yet. The dashboard-style NodePort approach would work, but NodePort only exposes ports 30000-32767 and is not suitable for production, so next we expose pods through an ingress instead.
5. Install ingress
5.1 Download the official config file; we use v1.3.1:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
5.2 Again because of the GFW, swap the images in the config file for the Alibaba mirror.
Replace the nginx-ingress-controller image:
sed -i 's/registry.k8s.io\/ingress-nginx\/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974/registry.cn-hangzhou.aliyuncs.com\/google_containers\/nginx-ingress-controller:v1.3.1/g' ./deploy.yaml
Replace the kube-webhook-certgen image:
sed -i 's/registry.k8s.io\/ingress-nginx\/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47/registry.cn-hangzhou.aliyuncs.com\/google_containers\/kube-webhook-certgen:v1.3.0/g' ./deploy.yaml
5.3 Create the ingress service
Create it:
kubectl apply -f deploy.yaml
Check the status:
kubectl get pod --all-namespaces
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/ad3ad0710cbb4f4ca0565dcd2835fd53~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
After creation, the ingress Service stays in pending state because there is no load balancer; we use metallb to provide one.
5.4 Install metallb
Run the install command:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/3d5dbaaa247c4ccaad4b8d5d7e14769c~tplv-k3u1fbpfcp-zoom-1.image" style="width:90%">
</p>
Create the secret:
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
Check the installation status:
kubectl get ns
kubectl get all -n metallb-system
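Note that metallb v0.13 is configured through CRDs rather than the old ConfigMap: until an IPAddressPool and an L2Advertisement exist, it assigns no addresses, which is why the Service below is bound to the public IP by hand. If you do have a routable address range, the CR-based configuration looks roughly like this (the address range is a placeholder assumption):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder: use a range you control
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```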
5.5 Bind the public IP (EXTERNAL-IP)
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/40d19a1c17c24081a41206223f23585b~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
Edit the Service:
kubectl edit service ingress-nginx-controller --namespace=ingress-nginx
and add:
externalIPs:
- 118.195.228.232
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/983400a6f66941758cd07d8423de29aa~tplv-k3u1fbpfcp-zoom-1.image" style="width:50%">
</p>
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/fbe7ce843e8141b3b8d788c437bdeb66~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
Check the startup status with `kubectl get pod --all-namespaces`:
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/ce49b1f235314202abeba5bc22c7aa93~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
The metallb and ingress-nginx pods are still pending; check why:
kubectl describe pod ingress-nginx-controller-6bfbdbdd64-jp7lw -n ingress-nginx
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/e48c4af8d92948538dfc64f5e37a36af~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
The reason is that the cluster currently has only a master node and no worker nodes. To save cost, we allow pods to be scheduled on the master, using it as a worker too.
5.6 Allow scheduling on the master node
Run:
kubectl taint nodes --all node-role.kubernetes.io/master-
Check pod status:
kubectl get pod --all-namespaces
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/eeaa43cd38334e08919eb3612fce9b3a~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
All pods are running. At this point a basic k8s cluster is essentially complete.
Trying out k8s
Set up DNS
Point an A record for your test domain at the server's public IP; details omitted.
Create a test pod
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
Create the ingress mapping
kubectl create ingress demo --class=nginx --rule="k8s.tigerb.cn/*=demo:80"
Test
Check the external port of the ingress controller's Service:
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/3b23725e5eb54f599e7d6b91f99c620c~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
Once the demo pod is running, visit http://k8s.tigerb.cn:32374/ to test the service.
<p align="center">
<img src="https://p3-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/a1ae0350dd3741f38a29798a452baca0~tplv-k3u1fbpfcp-zoom-1.image" style="width:80%">
</p>
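If DNS has not propagated yet, you can exercise the ingress rule without it by sending the Host header directly; a sketch (the IP is a placeholder, and the NodePort comes from the kubectl get service output above):

```shell
HOST_IP=118.195.228.232   # placeholder: your server's public IP
NODE_PORT=32374           # the ingress controller's HTTP NodePort
# The Host header must match the ingress rule's host (k8s.tigerb.cn)
curl -H 'Host: k8s.tigerb.cn' "http://$HOST_IP:$NODE_PORT/"
```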
And that's it: we have successfully deployed a k8s cluster, and with the dashboard, deploying, scaling up, and scaling down services is straightforward.