Installing a Kubernetes cluster
The Kubernetes installation process is quite involved. Without solid Linux operations experience, installing Kubernetes is very difficult, and because Google's servers cannot be reached from mainland China, installing k8s becomes even harder.
The kubeasz project (https://github.com/easzlab/kubeasz) greatly simplifies installing a k8s cluster, allowing an offline, one-command installation.
Prepare the first virtual machine
Set the virtual machine's CPUs
Upload the offline installation files
- Upload the `ansible` directory to `/etc/`
- Upload `easzup` to `/root`
Prepare the offline installation environment
Run the following on the CentOS 7 virtual machine:

```
cd ~/

# Download kubeasz's automated setup script easzup.
# Skip this step if the file has already been uploaded.
export release=2.2.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup

# Make easzup executable
chmod +x ./easzup

# Download the offline installation files and install/configure docker.
# Offline files that already exist are not downloaded again.
# The offline files are stored under /etc/ansible
./easzup -D

# Start the temporary container used by the kubeasz tooling
./easzup -S

# Enter that container
docker exec -it kubeasz sh

# The following commands run inside the container.
# Configure offline installation
cd /etc/ansible
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/chrony/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/ex-lb/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/kube-node/defaults/main.yml
sed -i 's/^INSTALL_SOURCE.*$/INSTALL_SOURCE: "offline"/g' roles/prepare/defaults/main.yml
exit

# Install python; skip if already installed
yum install python -y
```
Import images
To save time, the docker images used in later lessons don't have to be downloaded from the network again.
Import the images from images.gz (provided in the course materials) into docker:
```
docker load -i images.gz
```
Prepare three servers
Prepare three servers: one master and two worker nodes. Their IP addresses can be anything, but fixed IPs are preferred.
The IPs used in the tests below are:
- 192.168.64.191
- 192.168.64.192
- 192.168.64.193
Clone two more virtual machines from the first one
Of these three VMs, the first serves as the master and the other two as worker nodes.
Continue configuring the installation environment on the master

```
# Install pip; skip if already installed
wget -O /etc/yum.repos.d/epel-7.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum install git python-pip -y

# Install ansible with pip (use the Aliyun mirror if downloads are slow in China); skip if already installed
pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install ansible==2.6.12 netaddr==0.7.19 -i https://mirrors.aliyun.com/pypi/simple/

# On the ansible control node, set up passwordless login to the other node servers
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# Copy the public key to all nodes, including the master itself.
# Answer yes at the prompts and enter the root password.
ssh-copy-id 192.168.64.191
ssh-copy-id 192.168.64.192
ssh-copy-id 192.168.64.193
```
Configure the cluster servers' IPs
```
cd /etc/ansible && cp example/hosts.multi-node hosts && vim hosts
```
If memory is limited, you can deploy only two servers for testing:
- the main server acts as both the control node and a worker node
- reduce the number of etcd instances

```
# Check cluster host status
ansible all -m ping
```
Install the k8s cluster with one command
There are many installation steps and they take a while; wait patiently for the installation to finish.
```
cd /etc/ansible
ansible-playbook 90.setup.yml
```
Result of a successful installation:
Set up a kubectl command alias
```
# Alias kubectl to k
echo "alias k='kubectl'" >> ~/.bashrc

# Apply the setting
source ~/.bashrc
```
Configure auto-completion
```
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
```
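The alias from the previous step doesn't get completion automatically. If you want completion for `k` as well, the kubectl docs suggest registering the generated completion function for the alias (a sketch; `__start_kubectl` is the function name emitted by `kubectl completion bash`):

```
# Wire kubectl's bash completion up to the 'k' alias as well
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc
```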
Verify the installation
```
k get cs
---------------------------------------------------------
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

k get node
---------------------------------------------------------------------
NAME             STATUS                     ROLES    AGE     VERSION
192.168.64.191   Ready,SchedulingDisabled   master   5d23h   v1.15.2
192.168.64.192   Ready                      node     5d23h   v1.15.2
192.168.64.193   Ready                      node     5d23h   v1.15.2
```
First steps with Kubernetes
The kubectl run command is the simplest way to deploy an application; it creates the necessary components automatically, so for now we don't have to dig into the structure of each component.
Deploying an application with a ReplicationController and pods
A Pod is the object that wraps Docker containers. It has its own virtual environment (ports, environment variables, etc.), and one Pod can wrap multiple Docker containers.
An RC (ReplicationController) automatically manages pod deployment: it can start and stop pods automatically and scale them up and down.
Below we deploy an RC with a single command:
```
cd ~/
k run --image=luksa/kubia --port=8080 --generator=run/v1 kubia

k get rc
---------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   1         1         1       24s

k get pods
----------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE
kubia-9z6kt   1/1     Running   0          28s
```
Meaning of the kubectl run parameters:
- `--image=luksa/kubia`: the image name
- `--port=8080`: the port the pod exposes
- `--generator=run/v1`: create a ReplicationController (the trailing `kubia` is the RC's name)
Exposing pods with a service
```
k expose rc kubia --type=NodePort --name kubia-http

k get svc
------------------------------------------------------------------------------
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubia-http   NodePort   10.68.194.195   <none>        8080:20916/TCP   4s
```
This creates a service component that exposes the pods externally. Port 20916 is opened on every node server, and through it you can reach the pods' port 8080.
Accessing port 20916 on any of the following node servers reaches the application.
Note: change the port to the random port generated for you.
- http://192.168.64.191:20916/
- http://192.168.64.192:20916/
- http://192.168.64.193:20916/
Automatic pod scaling
k8s is very good at automatically scaling the nodes an application is deployed on: just specify how many pods you want, and k8s scales the pods automatically.

```
# Scale the pod count up to 3
k scale rc kubia --replicas=3

k get po -o wide
----------------------------------------------------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
kubia-q7bg5   1/1     Running   0          10s   172.20.3.29   192.168.64.193   <none>           <none>
kubia-qkcqh   1/1     Running   0          10s   172.20.2.30   192.168.64.192   <none>           <none>
kubia-zlmsn   1/1     Running   0          16m   172.20.3.28   192.168.64.193   <none>           <none>

# Scale the pod count down to 1
k scale rc kubia --replicas=1

# k8s automatically stops two pods; eventually only one pod remains in the list
k get po -o wide
---------------------------------------------------------------------------------------------------------------------
NAME          READY   STATUS        RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
kubia-q7bg5   1/1     Terminating   0          6m1s   172.20.3.29   192.168.64.193   <none>           <none>
kubia-qkcqh   1/1     Terminating   0          6m1s   172.20.2.30   192.168.64.192   <none>           <none>
kubia-zlmsn   1/1     Running       0          22m    172.20.3.28   192.168.64.193   <none>           <none>
```
Pods
Deploying a pod manually with a deployment file
Create the deployment file kubia-manual.yml:

```
cat <<EOF > kubia-manual.yml
apiVersion: v1             # k8s API version
kind: Pod                  # this file creates a Pod resource
metadata:
  name: kubia-manual       # pod name
spec:
  containers:              # configuration of the pod's containers
  - image: luksa/kubia     # image name
    imagePullPolicy: Never
    name: kubia            # container name
    ports:
    - containerPort: 8080  # port exposed by the container
      protocol: TCP
EOF
```
Create the pod from the deployment file
```
k create -f kubia-manual.yml

k get po
-----------------------------------------------
NAME           READY   STATUS    RESTARTS   AGE
kubia-manual   1/1     Running   0          19s
```
View the pod's deployment file

```
# View the pod's manifest
k get po kubia-manual -o yaml
```
View pod logs
```
k logs kubia-manual
```
Pod port forwarding
Use the kubectl port-forward command to set up port forwarding and expose a pod externally.
Map port 8888 of the server to port 8080 of the pod:

```
k port-forward kubia-manual --address localhost,192.168.64.191 8888:8080

# or expose port 8888 on all interfaces
k port-forward kubia-manual --address 0.0.0.0 8888:8080
```

Then open http://192.168.64.191:8888/ in a browser.
Pod labels
Labels can be attached to pods, and pods can be grouped and managed through those labels.
ReplicationController, ReplicaSet, and Service all use labels to manage groups of pods.
Specifying labels when creating a pod
Deploy a pod with the kubia-manual-with-labels.yml deployment file.
The file sets two custom labels on the pod: creation_method and env.

```
cat <<EOF > kubia-manual-with-labels.yml
apiVersion: v1              # API version
kind: Pod                   # resource type to deploy
metadata:
  name: kubia-manual-v2     # pod name
  labels:                   # labels, as key-value pairs
    creation_method: manual
    env: prod
spec:
  containers:               # container settings
  - image: luksa/kubia      # image
    name: kubia             # container name
    imagePullPolicy: Never
    ports:                  # ports exposed by the container
    - containerPort: 8080
      protocol: TCP
EOF
```
Create the resource from the deployment file
```
k create -f kubia-manual-with-labels.yml
```
View pod labels
List all pods along with their labels:
```
k get po --show-labels
------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE    LABELS
kubia-5rz9h       1/1     Running   0          109s   run=kubia
kubia-manual      1/1     Running   0          52s    <none>
kubia-manual-v2   1/1     Running   0          10s    creation_method=manual,env=prod
```
List pod labels as columns:
```
k get po -L creation_method,env
-----------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE     CREATION_METHOD   ENV
kubia-5rz9h       1/1     Running   0          4m19s
kubia-manual      1/1     Running   0          3m22s
kubia-manual-v2   1/1     Running   0          2m40s   manual            prod
```
Modifying pod labels
The env label of pod kubia-manual-v2 has the value prod; let's change it to debug.
When changing an existing label's value, you must pass the `--overwrite` flag; this guards against accidental modification.
```
k label po kubia-manual-v2 env=debug --overwrite

k get po -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h       1/1     Running   0          15m
kubia-manual      1/1     Running   0          14m
kubia-manual-v2   1/1     Running   0          13m   manual            debug
```
Set labels on pod kubia-manual:
```
k label po kubia-manual creation_method=manual env=debug
```
Set a label on pod kubia-5rz9h:
```
k label po kubia-5rz9h env=debug
```
Check the resulting labels:

```
k get po -L creation_method,env
--------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h       1/1     Running   0          18m                     debug
kubia-manual      1/1     Running   0          17m   manual            debug
kubia-manual-v2   1/1     Running   0          16m   manual            debug
```
Querying pods by label
Query pods with creation_method=manual:

```
# query with -l
k get po -l creation_method=manual -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-manual      1/1     Running   0          28m   manual            debug
kubia-manual-v2   1/1     Running   0          27m   manual            debug
```
Query pods that have an env label:

```
# query with -l
k get po -l env -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h       1/1     Running   0          31m                     debug
kubia-manual      1/1     Running   0          30m   manual            debug
kubia-manual-v2   1/1     Running   0          29m   manual            debug
```
Query pods with creation_method=manual and env=debug:

```
# query with -l
k get po -l creation_method=manual,env=debug -L creation_method,env
---------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-manual      1/1     Running   0          33m   manual            debug
kubia-manual-v2   1/1     Running   0          32m   manual            debug
```
Query pods that do not have a creation_method label:

```
# query with -l
k get po -l '!creation_method' -L creation_method,env
-----------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE   CREATION_METHOD   ENV
kubia-5rz9h   1/1     Running   0          36m                     debug
```
Other selector examples (usable with -l directly, as shown below):
- creation_method!=manual
- env in (prod,debug)
- env notin (prod,debug)
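A quick sketch of those selectors against the pods above; quoting keeps the shell from interpreting the expressions:

```
k get po -l 'creation_method!=manual'
k get po -l 'env in (prod,debug)'
k get po -l 'env notin (prod,debug)'
```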
Deploying pods to specific node servers
We cannot pin a pod to a node directly by the server's address.
Instead, label the node, and use a node selector in the pod spec so the pod is deployed to matching node servers.
Below we add the label gpu=true to the node server named 192.168.64.193:
```
k label node 192.168.64.193 gpu=true

k get node -l gpu=true -L gpu
------------------------------------------------------
NAME             STATUS   ROLES   AGE   VERSION   GPU
192.168.64.193   Ready    node    14d   v1.15.2   true
```
The deployment file below uses the node selector nodeSelector to pick nodes through the label gpu=true:

```
cat <<EOF > kubia-gpu.yml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-gpu          # pod name
spec:
  nodeSelector:            # node selector: deploy the pod to matching nodes
    gpu: "true"            # select matching nodes via the label gpu=true
  containers:              # container configuration
  - image: luksa/kubia     # image
    name: kubia            # container name
    imagePullPolicy: Never
EOF
```
Create pod kubia-gpu and check which node it was scheduled on:
```
k create -f kubia-gpu.yml

k get po -o wide
----------------------------------------------------------------------------------------------------------------------
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
kubia-5rz9h       1/1     Running   0          3m13s   172.20.2.35   192.168.64.192   <none>           <none>
kubia-gpu         1/1     Running   0          8m7s    172.20.3.35   192.168.64.193   <none>           <none>
kubia-manual      1/1     Running   0          58m     172.20.3.33   192.168.64.193   <none>           <none>
kubia-manual-v2   1/1     Running   0          57m     172.20.3.34   192.168.64.193
```
View the description of pod kubia-gpu:
```
k describe po kubia-gpu
------------------------------------------------
Name:         kubia-gpu
Namespace:    default
Priority:     0
Node:         192.168.64.193/192.168.64.193
......
```
Pod annotations
Annotations can be added to resources.
Unlike labels, annotations cannot be used by selectors.

```
# annotate
k annotate pod kubia-manual tedu.cn/shuoming="foo bar"
k describe po kubia-manual
```
Namespaces
Namespaces can be used to organize and manage resources.
Resources in different namespaces are not fully isolated from each other; they can still reach each other over the network.
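For example, a pod in one namespace can reach a Service in another through the cluster DNS names covered in the Service chapter below (a sketch; it assumes a Service named kubia exists in custom-namespace):

```
# run from inside any pod
curl http://kubia.custom-namespace.svc.cluster.local
```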
View namespaces
```
# namespaces
k get ns
k get po --namespace kube-system
k get po -n kube-system
```
Create a namespace
Create a new deployment file custom-namespace.yml, which creates a namespace named custom-namespace:
```
cat <<EOF > custom-namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
EOF
```
```
# create the namespace
k create -f custom-namespace.yml

k get ns
--------------------------------
NAME               STATUS   AGE
custom-namespace   Active   2s
default            Active   6d
kube-node-lease    Active   6d
kube-public        Active   6d
kube-system        Active   6d
```
Deploying a pod into a specific namespace
Create a pod and deploy it into the custom-namespace namespace:

```
# specify the namespace when creating the pod
k create -f kubia-manual.yml -n custom-namespace

# by default the default namespace is queried; the pod kubia-manual does not exist there
k get po kubia-manual

# query the pod in the custom-namespace namespace
k get po kubia-manual -n custom-namespace
----------------------------------------------------------
NAME           READY   STATUS              RESTARTS   AGE
kubia-manual   0/1     ContainerCreating   0          59s
```
Deleting resources

```
# Delete by name; multiple names can be given,
# e.g.: k delete po po1 po2 po3
k delete po kubia-gpu

# Delete by label
k delete po -l creation_method=manual

# Delete a namespace and all pods in it
k delete ns custom-namespace

# Delete all pods in the current namespace
k delete po --all

# Because a ReplicationController exists, new pods are created automatically
[root@master1 ~]# k get po
NAME          READY   STATUS    RESTARTS   AGE
kubia-m6k4d   1/1     Running   0          2m20s
kubia-rkm58   1/1     Running   0          2m15s
kubia-v4cmh   1/1     Running   0          2m15s

# Delete all resources of all types in the current namespace.
# This also deletes the system Service "kubernetes", which is recreated automatically right away.
k delete all --all
```
Liveness probes
There are three kinds of liveness probes (sketches of the TCP and Exec variants follow below):
- HTTP GET: a 2xx or 3xx response code counts as a successful probe
- TCP: open a TCP connection to the given port; success if the connection is established
- Exec: run an arbitrary command inside the container and check its exit code; exit code 0 counts as success
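Only the HTTP GET variant is demonstrated in detail below. For reference, the other two look roughly like this inside a container spec (a sketch, reusing port 8080 and a hypothetical /tmp/healthy file):

```
# TCP liveness probe: succeeds if a TCP connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 8080

# Exec liveness probe: succeeds if the command exits with code 0
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
```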
HTTP GET liveness probe
The luksa/kubia-unhealthy image
The application in the kubia-unhealthy image is written so that from the 6th request onward it returns a 500 error.
In the deployment file we add a probe to check the container's health.
By default the probe runs every 10 seconds, and after three consecutive failures the container is restarted.

```
cat <<EOF > kubia-liveness-probe.yml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness            # pod name
spec:
  containers:
  - image: luksa/kubia-unhealthy  # image
    name: kubia                   # container name
    imagePullPolicy: Never
    livenessProbe:                # liveness probe configuration
      httpGet:                    # HTTP GET liveness probe
        path: /                   # probe path
        port: 8080                # probe port
EOF
```
Create the pod

```
k create -f kubia-liveness-probe.yml

# the pod's RESTARTS counter increases by 1 roughly every minute and a half
k get po kubia-liveness
--------------------------------------------------
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   0          5m25s
```
Check the logs of the previous container instance: the first 5 probes succeeded, the following 3 failed, so the container was killed and restarted.
```
k logs kubia-liveness --previous
-----------------------------------------
Kubia server starting...
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
Received request from ::ffff:172.20.3.1
```
View the pod description
```
k describe po kubia-liveness
---------------------------------
......
    Restart Count:  6
    Liveness:       http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
......
```
- delay=0s: probing starts as soon as the container starts
- timeout=1s: the probe must respond within 1 second, otherwise it counts as a failure
- period=10s: the container is probed every 10 seconds
- failure=3: the container is restarted after 3 consecutive failures
By setting a delay, we can avoid probing before the application inside the container has fully started:

```
cat <<EOF > kubia-liveness-probe-initial-delay.yml
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    imagePullPolicy: Never
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 15   # delay before the first probe
EOF
```
Controllers
ReplicationController
An RC can manage multiple pods automatically: just specify the desired number of pod replicas, and scaling up and down happens automatically.
When a pod dies, the RC can shut it down and start a new pod to replace it.
Below is an RC deployment file that starts three kubia containers:

```
cat <<EOF > kubia-rc.yml
apiVersion: v1
kind: ReplicationController   # resource type
metadata:
  name: kubia                 # name of the RC
spec:
  replicas: 3                 # number of pod replicas
  selector:                   # selector: picks the pods this RC manages
    app: kubia                # pods labeled 'app=kubia' are managed by this RC
  template:                   # pod template, used to create new pods
    metadata:
      labels:
        app: kubia            # labels assigned to the pods
    spec:
      containers:             # container configuration
      - name: kubia           # container name
        image: luksa/kubia    # image
        imagePullPolicy: Never
        ports:
        - containerPort: 8080 # port exposed by the container
EOF
```
Create the RC
After the RC is created, it automatically creates 3 pods, matching the specified pod count of 3:
```
k create -f kubia-rc.yml

k get rc
----------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         2       2m11s

k get po -o wide
------------------------------------------------------------------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
kubia-fmtkw   1/1     Running   0          9m2s   172.20.1.7    192.168.64.192   <none>           <none>
kubia-lc5qv   1/1     Running   0          9m3s   172.20.1.8    192.168.64.192   <none>           <none>
kubia-pjs9n   1/1     Running   0          9m2s   172.20.2.11   192.168.64
```
The RC manages matching pods through the specified label app=kubia.
Any other labels may be added to a pod without affecting the pod's association with the RC:
```
k label pod kubia-fmtkw type=special

k get po --show-labels
----------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE     LABELS
kubia-fmtkw   1/1     Running   0          6h31m   app=kubia,type=special
kubia-lc5qv   1/1     Running   0          6h31m   app=kubia
kubia-pjs9n   1/1     Running   0          6h31m   app=kubia
```
However, changing the value of the pod's app label makes the pod escape the RC's management. The RC then sees one pod missing and immediately creates a new pod to satisfy the configured count of 3:
```
k label pod kubia-fmtkw app=foo --overwrite

k get pods -L app
-------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE     APP
kubia-fmtkw   1/1     Running   0          6h36m   foo
kubia-lc5qv   1/1     Running   0          6h36m   kubia
kubia-lhj4q   0/1     Pending   0          6s      kubia
kubia-pjs9n   1/1     Running   0          6h36m   kubia
```
Modifying the pod template
Changing the pod template only affects pods created afterwards; existing pods are not modified.
You can delete the old pods and let new ones replace them:

```
# Edit the ReplicationController and add a new label: foo=bar
k edit rc kubia
------------------------------------------------
......
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubia
        foo: bar          # add an arbitrary label
    spec:
......

# The labels of existing pods have not changed
k get pods --show-labels
----------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE     LABELS
kubia-lc5qv   1/1     Running   0          3d5h    app=kubia
kubia-lhj4q   1/1     Running   0          2d22h   app=kubia
kubia-pjs9n   1/1     Running   0          3d5h    app=kubia

# Scale the pods to 6 through the RC.
# You can use the scale command seen earlier:
# k scale rc kubia --replicas=6
# or edit the RC's replicas property and change it to 6
k edit rc kubia
---------------------
spec:
  replicas: 6     # changed from 3 to 6: scale to 6 pods
  selector:
    app: kubia

# Newly added pods carry the new label; old pods do not
k get pods --show-labels
----------------------------------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE     LABELS
kubia-8d9jj   0/1     Pending   0          2m23s   app=kubia,foo=bar
kubia-lc5qv   1/1     Running   0          3d5h    app=kubia
kubia-lhj4q   1/1     Running   0          2d22h   app=kubia
kubia-pjs9n   1/1     Running   0          3d5h    app=kubia
kubia-wb8sv   0/1     Pending   0          2m17s   app=kubia,foo=bar
kubia-xp4jv   0/1     Pending   0          2m17s   app=kubia,foo=bar

# Delete the rc without cascading to the pods, leaving the pods unmanaged
k delete rc kubia --cascade=false
```
ReplicaSet
ReplicaSet is designed to replace ReplicationController and offers richer pod selection.
Going forward we should always use RS rather than RC, though RC still appears in older systems.

```
cat <<EOF > kubia-replicaset.yml
apiVersion: apps/v1       # RS is a resource type provided in apps/v1
kind: ReplicaSet          # resource type
metadata:
  name: kubia             # the RS is named kubia
spec:
  replicas: 3             # number of pod replicas
  selector:
    matchLabels:          # label selector
      app: kubia          # select pods labeled "app=kubia"
  template:
    metadata:
      labels:
        app: kubia        # label added to created pods: "app=kubia"
    spec:
      containers:
      - name: kubia             # container name
        image: luksa/kubia      # image
        imagePullPolicy: Never
EOF
```
Create the ReplicaSet

```
k create -f kubia-replicaset.yml

# the pods that escaped management earlier are now managed by the RS;
# the desired pod count is 3, so the extra pods are shut down
k get rs
----------------------------------------
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       4s

# the 3 extra pods are shut down
k get pods --show-labels
----------------------------------------------------------------------
NAME          READY   STATUS        RESTARTS   AGE     LABELS
kubia-8d9jj   1/1     Pending       0          2m23s   app=kubia,foo=bar
kubia-lc5qv   1/1     Terminating   0          3d5h    app=kubia
kubia-lhj4q   1/1     Terminating   0          2d22h   app=kubia
kubia-pjs9n   1/1     Running       0          3d5h    app=kubia
kubia-wb8sv   1/1     Pending       0          2m17s   app=kubia,foo=bar
kubia-xp4jv   1/1     Terminating   0          2m17s   app=kubia,foo=bar

# the RS description is almost identical to an RC's
k describe rs kubia
```
Using the more expressive label selector

```
cat <<EOF > kubia-replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 4
  selector:
    matchExpressions:   # expression-based selector
    - key: app          # the label name is app
      operator: In      # the In operator
      values:           # list of label values
      - kubia
      - foo
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
        imagePullPolicy: Never
EOF
```
```
# delete the existing RS first
k delete rs kubia --cascade=false

# then recreate it
k create -f kubia-replicaset.yml

# view the rs
k get rs

# view the pods
k get po --show-labels
```
Available operators:
- In: the label's value matches one of the listed values
- NotIn: the label's value matches none of the listed values
- Exists: the pod has a label with the given name (any value)
- DoesNotExist: the pod does not have a label with the given name
Cleanup
```
k delete rs kubia
k get rs
k get po
```
DaemonSet
A DaemonSet runs one pod on every node, e.g. for resource monitoring or kube-proxy.
A DaemonSet does not specify a pod count; it deploys one pod on each node.

```
cat <<EOF > ssd-monitor-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet             # resource type
metadata:
  name: ssd-monitor         # name of the DS resource
spec:
  selector:
    matchLabels:            # label selector
      app: ssd-monitor      # matched label
  template:
    metadata:
      labels:
        app: ssd-monitor    # label added when creating pods
    spec:
      containers:                 # container configuration
      - name: main                # container name
        image: luksa/ssd-monitor  # image
        imagePullPolicy: Never
EOF
```
Create the DS
After the DS is created, a pod is created on every node, including the master:
```
k create -f ssd-monitor-daemonset.yml

k get po -o wide
-------------------------------------------------------------------------------------------------------------------------
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ssd-monitor-g7fjb   1/1     Running   0          57m   172.20.1.12   192.168.64.192   <none>           <none>
ssd-monitor-qk6t5   1/1     Running   0          57m   172.20.2.14   192.168.64.193   <none>           <none>
ssd-monitor-xxbq8   1/1     Running   0          57m   172.20.0.2    192.168.64.191   <none>           <none>
```
Pods can also be deployed only on selected nodes,
chosen via node labels:

```
cat <<EOF > ssd-monitor-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:   # node selector
        disk: ssd     # pick nodes that carry the label 'disk=ssd'
      containers:
      - name: main
        image: luksa/ssd-monitor
        imagePullPolicy: Never
EOF
```
```
# clean up first
k delete ds ssd-monitor

# then recreate
k create -f ssd-monitor-daemonset.yml
```
Looking at the DS and pods, no pod has been created, because no node carries the disk=ssd label:
```
k get ds
k get po
```
Set the label disk=ssd on node 192.168.64.192,
and the DS immediately creates a pod on that node:
```
k label node 192.168.64.192 disk=ssd

k get ds
---------------------------------------------------------------------------------------
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   1         1         0       1            0           disk=ssd        37m

k get po -o wide
----------------------------------------------------------------------------------------------------------------------------------
NAME                READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
ssd-monitor-n6d45   1/1     Running   0          16s   172.20.1.13   192.168.64.192   <none>           <none>
```
Likewise, to test further, set the label disk=ssd on node 192.168.64.193:
```
k label node 192.168.64.193 disk=ssd
k get ds
k get po -o wide
```
Remove the disk label from node 192.168.64.193, and the pod deployed on that node is destroyed immediately:

```
# note the deletion syntax: disk-
k label node 192.168.64.193 disk-
k get ds
k get po -o wide
```
Cleanup
```
k delete ds ssd-monitor
```
Job
A Job runs a single task; when the task finishes, the pod is not restarted.

```
cat <<EOF > exporter.yml
apiVersion: batch/v1            # the Job resource is provided in batch/v1
kind: Job                       # resource type
metadata:
  name: batch-job               # resource name
spec:
  template:
    metadata:
      labels:
        app: batch-job          # pod label
    spec:
      restartPolicy: OnFailure  # restart when the task fails
      containers:
      - name: main              # container name
        image: luksa/batch-job  # image
        imagePullPolicy: Never
EOF
```
Create the job
The process in the batch-job image exits on its own after running for 120 seconds:
```
k create -f exporter.yml

k get job
-----------------------------------------
NAME        COMPLETIONS   DURATION   AGE
batch-job   0/1                      7s

k get po
-------------------------------------------------------------
NAME              READY   STATUS              RESTARTS   AGE
batch-job-q97zf   0/1     ContainerCreating   0          7s
```
After waiting two minutes, the task in the pod has exited; check the job and pod again:
```
k get job
-----------------------------------------
NAME        COMPLETIONS   DURATION   AGE
batch-job   1/1           2m5s       2m16s

k get po
-----------------------------------------------------
NAME              READY   STATUS      RESTARTS   AGE
batch-job-q97zf   0/1     Completed   0          2m20s
```
Running a pod 5 times in sequence with a Job
The first pod is created; when it completes, the second pod is created, and so on, until 5 pods have completed in sequence:

```
cat <<EOF > multi-completion-batch-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 5    # number of completions required
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
        imagePullPolicy: Never
EOF
```
```
k create -f multi-completion-batch-job.yml
```
Complete 5 pods in total, with up to two pods running at the same time:

```
cat <<EOF > multi-completion-parallel-batch-job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-parallel-batch-job
spec:
  completions: 5    # 5 completions in total
  parallelism: 2    # two pods may run at the same time
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: luksa/batch-job
        imagePullPolicy: Never
EOF
```
```
k create -f multi-completion-parallel-batch-job.yml
```
CronJob
For scheduled and recurring tasks.
The cron schedule format is: "minute hour day-of-month month day-of-week"
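A few more schedule strings for reference (standard cron syntax; these are illustrative and not part of the deployment below):

```
# "0,15,30,45 * * * *"  - at minutes 0, 15, 30 and 45 of every hour
# "0 3 * * *"           - every day at 03:00
# "30 2 * * 1"          - every Monday at 02:30
# "0 0 1 * *"           - at midnight on the 1st of every month
```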
```
cat <<EOF > cronjob.yml
apiVersion: batch/v1beta1   # API version
kind: CronJob               # resource type
metadata:
  name: batch-job-every-fifteen-minutes
spec:
  # 0,15,30,45  - minutes
  # first *     - every hour
  # second *    - every day of the month
  # third *     - every month
  # fourth *    - every day of the week
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: periodic-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: luksa/batch-job
            imagePullPolicy: Never
EOF
```

Create the cronjob

```
k create -f cronjob.yml

# check the cronjob right away; no pod has been created yet
k get cj
----------------------------------------------------------------------------------------------
NAME                              SCHEDULE             SUSPEND   ACTIVE   LAST SCHEDULE   AGE
batch-job-every-fifteen-minutes   0,15,30,45 * * * *   False     1        27s             2m17s

# at minute 0, 15, 30 or 45, a pod is created
k get po
--------------------------------------------------------------------------------------
NAME                                               READY   STATUS    RESTARTS   AGE
batch-job-every-fifteen-minutes-1567649700-vlmdw   1/1     Running   0          36s
```
Service
A Service resource provides a single, stable access address for a set of pods:

```
cat <<EOF > kubia-svc.yml
apiVersion: v1
kind: Service        # resource type
metadata:
  name: kubia        # resource name
spec:
  ports:
  - port: 80         # port the Service exposes
    targetPort: 8080 # container port
  selector:
    app: kubia       # select all pods labeled app=kubia
EOF
```
```
k create -f kubia-svc.yml

k get svc
--------------------------------------------------------------------
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.68.0.1      <none>        443/TCP   2d11h
kubia        ClusterIP   10.68.163.98   <none>        80/TCP    5s
```
If no pod carries the app=kubia label, create the earlier ReplicaSet resource and let the RS create the pods:
```
k create -f kubia-replicaset.yml
```
Accessing a Service from inside the cluster
Run curl http://10.68.163.98 to access the Service.
Run it several times and you'll see the Service distributing requests across the pods in round-robin fashion:
```
curl http://10.68.163.98
# [root@localhost ~]# curl http://10.68.163.98
# You've hit kubia-xdj86
# [root@localhost ~]# curl http://10.68.163.98
# You've hit kubia-xmtq2
# [root@localhost ~]# curl http://10.68.163.98
# You've hit kubia-5zm2q
# [root@localhost ~]# curl http://10.68.163.98
# You've hit kubia-xdj86
# [root@localhost ~]# curl http://10.68.163.98
# You've hit kubia-xmtq2
```
Session affinity
Requests from the same client always go to the same pod:

```
cat <<EOF > kubia-svc-clientip.yml
apiVersion: v1
kind: Service
metadata:
  name: kubia-clientip
spec:
  sessionAffinity: ClientIP   # session affinity based on the client IP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
EOF
```

```
k create -f kubia-svc-clientip.yml

k get svc
------------------------------------------------------------------------
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.68.0.1      <none>        443/TCP   2d12h
kubia            ClusterIP   10.68.163.98   <none>        80/TCP    38m
kubia-clientip   ClusterIP   10.68.72.120   <none>        80/TCP    2m15s

# enter the kubia-5zm2q container and send requests to the Service;
# run it several times and every request hits the same pod
curl http://10.68.72.120
```
Inside a pod, the Service's IP address can be discovered through environment variables.
These variables do not exist in pods created before the Service; delete the old pods and let new pods replace them:
```
k delete po --all

k get po
-----------------------------------------------
NAME          READY   STATUS    RESTARTS   AGE
kubia-k66lz   1/1     Running   0          64s
kubia-vfcqv   1/1     Running   0          63s
kubia-z257h   1/1     Running   0          63s
```
```
k exec kubia-k66lz env
------------------------------------------------------------------
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-k66lz
KUBIA_SERVICE_PORT=80                        # port of the kubia service
KUBIA_PORT=tcp://10.68.163.98:80
KUBIA_CLIENTIP_SERVICE_PORT=80               # port of the kubia-clientip service
KUBIA_CLIENTIP_PORT_80_TCP=tcp://10.68.72.120:80
KUBIA_CLIENTIP_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=10.68.0.1
KUBERNETES_PORT_443_TCP=tcp://10.68.0.1:443
KUBIA_SERVICE_HOST=10.68.163.98              # IP of the kubia service
KUBIA_CLIENTIP_SERVICE_HOST=10.68.72.120     # IP of the kubia-clientip service
......
```
Accessing a Service through its fully qualified domain name:

```
# enter a container
k exec -it kubia-k66lz bash

ping kubia
curl http://kubia
curl http://kubia.default
curl http://kubia.default.svc.cluster.local
```
Endpoints
An Endpoints resource sits between a Service and its pods.
One Endpoints resource contains a list of pod addresses.

```
# view the kubia service's endpoints
k describe svc kubia
------------------------------
......
Endpoints:  172.20.2.40:8080,172.20.3.57:8080,172.20.3.58:8080
......

# view all endpoints
k get ep
--------------------------------------------------------------------------
NAME             ENDPOINTS                                             AGE
kubia            172.20.2.40:8080,172.20.3.57:8080,172.20.3.58:8080   95m
kubia-clientip   172.20.2.40:8080,172.20.3.57:8080,172.20.3.58:8080   59m

# view the endpoints named kubia
k get ep kubia
```
A service without a pod selector does not get an Endpoints resource:

```
cat <<EOF > external-service.yml
apiVersion: v1
kind: Service
metadata:
  name: external-service   # Service name
spec:
  ports:
  - port: 80
EOF
```

```
# create the Service without a selector; no Endpoints resource is created
k create -f external-service.yml

# view the Service
k get svc

# accessing the Service via its cluster-internal IP is refused: there is no endpoints address list
curl http://10.68.191.212
```
Create an Endpoints resource tied to the Service; its name must match the Service's name:

```
cat <<EOF > external-service-endpoints.yml
apiVersion: v1
kind: Endpoints            # resource type
metadata:
  name: external-service   # must match the Service name
subsets:
- addresses:               # list of addresses
  - ip: 120.52.99.224      # a China Unicom IP
  - ip: 117.136.190.162    # a China Mobile IP
  ports:
  - port: 80               # port of the target service
EOF
```

```
# create the Endpoints resource
k create -f external-service-endpoints.yml
```

```
# enter a pod container
k exec -it kubia-k66lz bash

# access external-service;
# repeated requests are sent round-robin across the endpoints address list
curl http://external-service
```
Accessing an external service through a fully qualified domain name:

```
cat <<EOF > external-service-externalname.yml
apiVersion: v1
kind: Service
metadata:
  name: external-service-externalname
spec:
  type: ExternalName
  externalName: www.chinaunicom.com.cn   # the domain name
  ports:
  - port: 80
EOF
```
Create the service

```
k create -f external-service-externalname.yml

# enter a container
k exec -it kubia-k66lz bash

# access external-service-externalname
curl http://external-service-externalname
```
Exposing services to clients
The Services created so far are only reachable inside the cluster network. How can external clients access a Service?
Three ways:
- NodePort: every node opens a port
- LoadBalancer: an extension of NodePort; the load balancer is provided by cloud infrastructure (a sketch follows below)
- Ingress
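Only NodePort is demonstrated below. For comparison, a LoadBalancer Service is declared roughly like this (a sketch; on this bare-metal lab cluster there is no cloud provider, so EXTERNAL-IP would stay `<pending>`):

```
cat <<EOF > kubia-svc-loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer   # the cloud infrastructure provisions an external load balancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
EOF
```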
NodePort
Open the same port on every node (including the master); the Service can then be reached through that port on any node:

```
cat <<EOF > kubia-svc-nodeport.yml
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort       # open an access port on every node
  ports:
  - port: 80           # port for access inside the cluster
    targetPort: 8080   # container port
    nodePort: 30123    # port for external access
  selector:
    app: kubia
EOF
```

Create and inspect the Service
```
k create -f kubia-svc-nodeport.yml

k get svc kubia-nodeport
-----------------------------------------------------------------------------
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubia-nodeport   NodePort   10.68.140.119   <none>        80:30123/TCP   14m
```
The Service can be accessed through port 30123 on any node:
- http://192.168.64.191:30123
- http://192.168.64.192:30123
- http://192.168.64.193:30123
Mounting disks into containers
Volumes
Volume types:
- emptyDir: a simple empty directory
- hostPath: a path on the worker node's disk (see the sketch below)
- gitRepo: a local repository cloned from git
- nfs: an NFS shared filesystem
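The example that follows uses emptyDir. For comparison, a hostPath volume is declared like this (a fragment of a pod spec; /data/pod-data is a hypothetical path on the node's disk):

```
  volumes:
  - name: data
    hostPath:
      path: /data/pod-data   # hypothetical directory on the worker node
```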
Create a pod with two containers that share the same volume:

```
cat <<EOF > fortune-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
  labels:
    app: fortune
spec:
  containers:
  - image: luksa/fortune               # image name
    name: html-genrator                # container name
    imagePullPolicy: Never
    volumeMounts:
    - name: html                       # the volume named html
      mountPath: /var/htdocs           # mount path inside this container
  - image: nginx:alpine                # second image name
    name: web-server                   # second container name
    imagePullPolicy: Never
    volumeMounts:
    - name: html                       # the same html volume
      mountPath: /usr/share/nginx/html # mount path inside the second container
      readOnly: true                   # mounted read-only
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:                             # volumes
  - name: html                         # volume name
    emptyDir: {}                       # an emptyDir volume
EOF
```
```
k create -f fortune-pod.yml
k get po
```
Create a Service through which the pod's port 80 can be accessed:

```
cat <<EOF > fortune-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: fortune
spec:
  type: NodePort
  ports:
  - port: 8088
    targetPort: 80
    nodePort: 38088
  selector:
    app: fortune
EOF
```

```
k create -f fortune-svc.yml
k get svc

# open http://192.168.64.191:38088/ in a browser
```
NFS filesystem
On the master node 192.168.64.191, create the nfs directory /etc/nfs_data
and allow hosts on the 192.168.64 network to access this shared directory:

```
# create the folder
mkdir /etc/nfs_data

# write the configuration into the exports file
# no_root_squash: the server side uses root permissions
cat <<EOF > /etc/exports
/etc/nfs_data 192.168.64.0/24(rw,async,no_root_squash)
EOF
```
```
systemctl enable nfs
systemctl enable rpcbind
systemctl start nfs
systemctl start rpcbind
```
Try mounting the remote nfs directory on a client host, e.g. 192.168.64.192:

```
# create the mount point
mkdir /etc/web_dir/

# on the client, mount the server's nfs directory
mount -t nfs 192.168.64.191:/etc/nfs_data /etc/web_dir/
```
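A quick way to confirm the mount worked, and to detach it again, on the client:

```
# show the mounted filesystem
df -h /etc/web_dir/

# unmount when done testing
umount /etc/web_dir/
```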
Persistent storage
Create a PersistentVolume resource:

```
cat <<EOF > mongodb-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 1Gi                          # size of the persistent volume
  accessModes:
  - ReadWriteOnce                         # may be mounted read-write by a single client
  - ReadOnlyMany                          # may be mounted read-only by many clients
  persistentVolumeReclaimPolicy: Retain   # keep the volume when the claim is released
  nfs:                                    # remote nfs directory
    path: /etc/nfs_data
    server: 192.168.64.191
EOF
```

```
# create the persistent volume
k create -f mongodb-pv.yml

# view persistent volumes
k get pv
----------------------------------------------------------------------------------------------------------
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mongodb-pv   1Gi        RWO,ROX        Retain           Available                                   4s
```
PersistentVolumeClaim
A persistent volume claim decouples the application from the underlying storage technology:

```
cat <<EOF > mongodb-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  resources:
    requests:
      storage: 1Gi        # request 1GiB of storage
  accessModes:
  - ReadWriteOnce         # allow a single client to read and write
  storageClassName: ""    # see the dynamic provisioning chapter
EOF
```
```
k create -f mongodb-pvc.yml

k get pvc
-----------------------------------------------------------------------------------
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-pvc   Bound    mongodb-pv   1Gi        RWO,ROX                       3s
```
```
cat <<EOF > mongodb-pod-pvc.yml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - image: mongo
    name: mongodb
    imagePullPolicy: Never
    securityContext:
      runAsUser: 0
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc   # reference the "persistent volume claim" created above
EOF
```
Verify that the pod has mounted the remote nfs directory as a persistent volume:
```
k create -f mongodb-pod-pvc.yml
k exec -it mongodb mongo

use mystore
db.foo.insert({name:'foo'})
db.foo.find()
```
Look at the files in the remote nfs directory:
```
cd /etc/nfs_data
ls
```
Configuring startup parameters
Docker's command-line arguments
Dockerfile instructions that define the command and its arguments:
- ENTRYPOINT: the command executed inside the container when it starts
- CMD: the arguments passed to the startup command

CMD can be overridden on the docker run command line. For example:

```
......
ENTRYPOINT ["java", "-jar", "/opt/sp05-eureka-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=eureka1"]
```
When starting the container, you can simply run:

```
docker run <image>
```

or override CMD when starting the container:

```
docker run <image> --spring.profiles.active=eureka2
```
Overriding docker's ENTRYPOINT and CMD in k8s:
- command overrides ENTRYPOINT
- args overrides CMD (a combined sketch follows below)
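Putting that together, a container spec can override both at once (a sketch with a hypothetical image and paths):

```
spec:
  containers:
  - image: some/image           # hypothetical image
    command: ["/bin/app"]       # overrides the Dockerfile ENTRYPOINT
    args: ["--interval", "5"]   # overrides the Dockerfile CMD
```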
In the image luksa/fortune:args, the interval for the automatically generated content is set to 10 seconds:

```
......
CMD ["10"]
```

k8s's args can be used to override docker's CMD:

```
cat <<EOF > fortune-pod-args.yml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
  labels:
    app: fortune
spec:
  containers:
  - image: luksa/fortune:args
    args: ["2"]            # the CMD in the docker image is 10; args overrides it with 2
    name: html-genrator
    imagePullPolicy: Never
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    imagePullPolicy: Never
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    emptyDir: {}
EOF
```

```
k create -f fortune-pod-args.yml

# view the pod
k get po -o wide
--------------------------------------------------------------------------------------------------------------
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
fortune   2/2     Running   0          34s   172.20.2.55   192.168.64.192   <none>           <none>
```
Run curl repeatedly against the pod, and you will see the data refresh every 2 seconds.
Note: change the IP to your pod's IP.

```
curl http://172.20.2.55
```
Environment variables
In the image luksa/fortune:env, the environment variable INTERVAL specifies the content-generation interval.
The configuration below uses env to set the INTERVAL environment variable inside the container:

```
cat <<EOF > fortune-pod-env.yml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
  labels:
    app: fortune
spec:
  containers:
  - image: luksa/fortune:env
    env:                    # set the environment variable INTERVAL=5
    - name: INTERVAL
      value: "5"
    name: html-genrator
    imagePullPolicy: Never
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    imagePullPolicy: Never
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    emptyDir: {}
EOF
```

```
k delete po fortune
k create -f fortune-pod-env.yml

# view the pod
k get po -o wide
--------------------------------------------------------------------------------------------------------------
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
fortune   2/2     Running   0          8s    172.20.2.56   192.168.64.192   <none>           <none>

# enter the pod
k exec -it fortune bash

# view the pod's environment variables
env
------------
INTERVAL=5
......

# exit the pod, back to the host
exit
```
Run curl repeatedly, and you will see the data refresh every 5 seconds.
Note: change the IP to your pod's IP.

```
curl http://172.20.2.56
```
ConfigMap
A ConfigMap resource lets you pull environment-variable configuration out of the pod, decoupling that configuration from the pod.
A ConfigMap can be created from the command line:

```
# create directly on the command line
k create configmap fortune-config --from-literal=sleep-interval=20
```

Or from a deployment file:

```
# or create it from a file
cat <<EOF > fortune-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fortune-config
data:
  sleep-interval: "10"
EOF
```

```
# create the ConfigMap
k create -f fortune-config.yml

# view the ConfigMap's configuration
k get cm fortune-config -o yaml
```
Fetch configuration data from the ConfigMap and set it as a pod environment variable:

```
cat <<EOF > fortune-pod-env-configmap.yml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
  labels:
    app: fortune
spec:
  containers:
  - image: luksa/fortune:env
    imagePullPolicy: Never
    env:
    - name: INTERVAL             # environment variable name
      valueFrom:
        configMapKeyRef:         # the variable's value comes from a ConfigMap
          name: fortune-config   # name of the ConfigMap to use
          key: sleep-interval    # key used to read the value from the ConfigMap
    name: html-genrator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    imagePullPolicy: Never
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    emptyDir: {}
EOF
```
ConfigMap -> env -> args
Once the environment variable is configured, it can be used in the startup arguments:

```
cat <<EOF > fortune-pod-args.yml
apiVersion: v1
kind: Pod
metadata:
  name: fortune
  labels:
    app: fortune
spec:
  containers:
  - image: luksa/fortune:args
    imagePullPolicy: Never
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: fortune-config
          key: sleep-interval
    args: ["$(INTERVAL)"]   # use the environment variable in the startup arguments
    name: html-genrator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    imagePullPolicy: Never
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html
    emptyDir: {}
EOF
```
Creating a ConfigMap from files on disk
First delete the previously created ConfigMap:

```
k delete cm fortune-config
```
Create a folder to hold the configuration files:

```
cd ~/
mkdir configmap-files
cd configmap-files
```
Create an nginx configuration file that enables compression for plain-text and xml files:

```
cat <<EOF > my-nginx-config.conf
server {
    listen              80;
    server_name         www.kubia-example.com;

    gzip on;
    gzip_types text/plain application/xml;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
EOF
```
Add a sleep-interval file containing the value 25:

```
cat <<EOF > sleep-interval
25
EOF
```
Create the ConfigMap from the configmap-files folder:

```
cd ~/
k create configmap fortune-config --from-file=configmap-files
```
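To check what was picked up (each file name becomes a key, and the file contents its value):

```
k describe cm fortune-config
```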
Deployment
Deployment is a higher-level resource used to deploy or upgrade applications.
When a Deployment is created, a ReplicaSet resource is created with it; the actual pods are created and managed by the ReplicaSet, not directly by the Deployment.
During a rolling upgrade, the Deployment can introduce another ReplicaSet and coordinate the two ReplicaSets.

```
cat <<EOF > kubia-deployment-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      name: kubia
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v1
        imagePullPolicy: Never
        name: nodejs
EOF
```
```
k create -f kubia-deployment-v1.yml --record

k get deploy
-----------------------------------------------
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
kubia   3/3     3            3           2m35s

k get rs
----------------------------------------------------
NAME               DESIRED   CURRENT   READY   AGE
kubia-66b4657d7b   3         3         3       3m4s

k get po
------------------------------------------------------------
NAME                     READY   STATUS    RESTARTS   AGE
kubia-66b4657d7b-f9bn7   1/1     Running   0          3m12s
kubia-66b4657d7b-kzqwt   1/1     Running   0          3m12s
kubia-66b4657d7b-zm4xd   1/1     Running   0          3m12s

k rollout status deploy kubia
------------------------------------------
deployment "kubia" successfully rolled out
```
The number in the rs and pod names is the hash of the pod template.
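That hash is also attached to the pods as the pod-template-hash label, so you can display it directly:

```
k get po -L pod-template-hash
```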
Upgrading a Deployment
Just change the image tag in the pod template, and the Deployment carries out the upgrade automatically.
Deployment upgrade strategies:
- Rolling update (RollingUpdate): delete old pods gradually while creating new ones; this is the default strategy
- Recreate: delete all old pods at once, then create the new pods
Set minReadySeconds to 10 seconds to slow the rolling update down, making the upgrade process easier to observe:

```
k patch deploy kubia -p '{"spec": {"minReadySeconds": 10}}'
```
Triggering a rolling update
Changing the image used by the Deployment's pod template triggers a rolling update.
For easier observation, run a loop in another terminal that accesses the pods through the service:

```
while true; do curl http://192.168.64.191:30123; sleep 0.5s; done
```

```
k set image deploy kubia nodejs=luksa/kubia:v2
```
Use the following commands to understand the upgrade process and its mechanics:
```
k rollout status deploy kubia
k get rs
k get po --show-labels
k describe rs kubia-66b4657d7b
```
Rolling back a Deployment
The application in the luksa/kubia:v3 image simulates a bug: from the 5th request onward it returns a 500 error.

```
k set image deploy kubia nodejs=luksa/kubia:v3
```

Manually roll back to the previous version:

```
k rollout undo deploy kubia
```
Controlling the rolling-update rate
During a rolling update:
- new-version pods are created first
- old-version pods are then destroyed
How many pods are created and destroyed at a time is controlled by two parameters (a patch example follows below):
- maxSurge: defaults to 25%. How many pods are allowed beyond the desired count. With a desired count of 4, at most 5 pods may actually exist during the rolling update.
- maxUnavailable: defaults to 25%. How many pods may be unavailable. With a desired count of 4, at most 1 pod may be unavailable during the update; in other words, at least 3 available pods must be maintained at all times.
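They can be tuned in the same style as the earlier minReadySeconds patch (the values here are illustrative):

```
k patch deploy kubia -p '{"spec": {"strategy": {"rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'
```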
View the parameters:
```
k get deploy -o yaml
--------------------------------
......
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
......
```
Pausing a rolling update
Upgrade the image to v4 to trigger an update, then pause it right away.
One new-version pod starts; with the update paused, a small share of users reaches the new version while we watch whether it runs correctly.
Depending on how the new version behaves, we can then complete the update or roll back to the old version.

```
k set image deploy kubia nodejs=luksa/kubia:v4

# pause
k rollout pause deploy kubia

# resume
k rollout resume deploy kubia
```
Automatically blocking a bad rollout
minReadySeconds:
- a newly created pod must run successfully for this long before the rollout continues
- if the container's readiness probe reports failure during that window, the rollout is blocked

Modify the Deployment configuration and add a readiness probe:

```
cat <<EOF > kubia-deployment-v3-with-readinesscheck.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  minReadySeconds: 10
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      name: kubia
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v3
        name: nodejs
        imagePullPolicy: Never
        readinessProbe:
          periodSeconds: 1
          httpGet:
            path: /
            port: 8080
EOF
```

```
k apply -f kubia-deployment-v3-with-readinesscheck.yml
```
The readiness probe interval is set to 1 second. From the 5th request onward every request returns a 500 error, so the container goes unready. minReadySeconds is set to 10 seconds, and the rollout only continues after a pod has been ready for 10 seconds, so the rolling update gets blocked and makes no progress.
By default, after the rollout has been blocked for 10 minutes it is considered failed, and the Deployment's description shows a timeout (ProgressDeadlineExceeded):
```
k describe deploy kubia
-----------------------------
......
Conditions:
  Type           Status   Reason
  ----           ------   ------
  Available      True     MinimumReplicasAvailable
  Progressing    False    ProgressDeadlineExceeded
......
```
At this point, the only way back is to roll back manually with rollout undo:
```
k rollout undo deploy kubia
```
Dashboard
View the Dashboard deployment information

```
# view the pod
k get pod -n kube-system | grep dashboard
-------------------------------------------------------------------------------
kubernetes-dashboard-5c7687cf8-s2f9z   1/1   Running   0   10d

# view the service
k get svc -n kube-system | grep dashboard
------------------------------------------------------------------------------------------------------
kubernetes-dashboard   NodePort   10.68.239.141   <none>   443:20176/TCP   10d

# view cluster info
k cluster-info | grep dashboard
-------------------------------------------------------------------------------------------------------------------------------------------------------------
kubernetes-dashboard is running at https://192.168.64.191:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
```

From the output above, the dashboard's access URL is:
https://192.168.64.191:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Accessing the dashboard right now fails because of its security settings.
Access with certificate verification
Use the cluster CA to generate a client certificate; this certificate carries full permissions:

```
cd /etc/kubernetes/ssl

# export the certificate file
openssl pkcs12 -export -in admin.pem -inkey admin-key.pem -out kube-admin.p12
```

Download the certificate file /etc/kubernetes/ssl/kube-admin.p12 and import it into your browser.
Visiting the dashboard now prompts for a login; here we log in with a token (https://192.168.64.191:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy).
Token

```
# create the Service Account and ClusterRoleBinding
k apply -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml

# get the Bearer Token; copy the line beginning with 'token:' from the output
k -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```