Introduction to DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. When a Node joins the cluster, a Pod is added on it; when a Node is removed from the cluster, that Pod is garbage collected. Deleting a DaemonSet deletes all the Pods it created.
Some typical uses of a DaemonSet:
- Running a cluster storage daemon on every Node, such as glusterd or ceph
- Running a log collection daemon on every Node, such as fluentd or logstash
- Running a monitoring daemon on every Node, such as Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond
By default a DaemonSet runs one Pod on every node (except master nodes), or it can select the nodes to run on by label matching.
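As a sketch of the label-matching approach (the label key `disktype` and its value are hypothetical, not from the original setup), a nodeSelector in the Pod template restricts the DaemonSet to matching nodes:

```yaml
# Fragment of a DaemonSet manifest: only nodes labeled disktype=ssd run the Pod.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```

A node is brought into scope by labeling it, e.g. `kubectl label node k8s-node1 disktype=ssd`; removing the label causes the DaemonSet's Pod on that node to be evicted.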
DaemonSet spec
```yaml
apiVersion: apps/v1              # API group and version
kind: DaemonSet                  # resource type identifier
metadata:
  name <string>                  # resource name, unique within its scope
  namespace <string>             # namespace; DaemonSet is a namespaced resource
spec:
  minReadySeconds <integer>      # seconds a new Pod must be ready, with no container crashing, to count as available
  selector <object>              # label selector; must match the labels in the Pod template
  template <object>              # Pod template object
  revisionHistoryLimit <integer> # number of update history records to keep, default 10
  updateStrategy <Object>        # update strategy
    type <string>                # update type; valid values are OnDelete and RollingUpdate
    rollingUpdate <Object>       # rolling update parameters, only for the RollingUpdate type
      maxUnavailable <string>    # number or percentage by which the Pod count may fall short of the desired count during an update
```
Example 1: Create a DaemonSet to deploy node-exporter
```shell
[root@k8s-master PodControl]# cat daemonset-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  namespace: default
  labels:
    app: prometheus
    component: node-exporter
spec:
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus
        component: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:v0.18.0
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          containerPort: 9100
          hostPort: 9100
        livenessProbe:
          tcpSocket:
            port: prom-node-exp
          initialDelaySeconds: 3
        readinessProbe:
          httpGet:
            path: '/metrics'
            port: prom-node-exp
            scheme: HTTP
          initialDelaySeconds: 5
      hostNetwork: true
      hostPID: true
[root@k8s-master PodControl]# kubectl apply -f daemonset-demo.yaml
daemonset.apps/daemonset-demo created
[root@k8s-master PodControl]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
deployment-demo-77d46c4794-fhz4l   1/1     Running   0          16h
deployment-demo-77d46c4794-kmrhn   1/1     Running   0          16h
deployment-demo-fb544c5d8-5f9lp    1/1     Running   0          17h
[root@k8s-master PodControl]# cat daemonset-demo.yaml
...
    spec:
      containers:
      - image: prom/node-exporter:latest   # update to the latest version
...
# the default update strategy is rolling update
[root@k8s-master PodControl]# kubectl apply -f daemonset-demo.yaml && kubectl rollout status daemonset/daemonset-demo
daemonset.apps/daemonset-demo configured
Waiting for daemon set "daemonset-demo" rollout to finish: 0 of 3 updated pods are available...
Waiting for daemon set "daemonset-demo" rollout to finish: 1 of 3 updated pods are available...
Waiting for daemon set "daemonset-demo" rollout to finish: 2 of 3 updated pods are available...
daemon set "daemonset-demo" successfully rolled out
```
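The update behavior seen above can be tuned through updateStrategy. A minimal sketch with illustrative values (not taken from the demo manifest):

```yaml
# Fragment of a DaemonSet spec. RollingUpdate is the default type; OnDelete
# defers replacing a node's Pod until that Pod is deleted manually.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one node may lack an available Pod during the update
```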
Introduction to Job and its spec
A Job handles batch tasks, that is, tasks that execute only once; it guarantees that one or more Pods of the batch task end successfully.
Job spec
```yaml
apiVersion: batch/v1                # API group and version
kind: Job                           # resource type identifier
metadata:
  name <string>                     # resource name, unique within its scope
  namespace <string>                # namespace; Job is a namespaced resource
spec:
  selector <object>                 # label selector; must match the labels in the Pod template
  template <object>                 # Pod template object
  completions <integer>             # desired number of successful completions, i.e. Pods that finish successfully
  ttlSecondsAfterFinished <integer> # lifetime of the Job after it reaches a terminal state; it is deleted once expired
  parallelism <integer>             # maximum parallelism of the Job, default 1
  backoffLimit <integer>            # retries before marking the Job as Failed, default 6
  activeDeadlineSeconds <integer>   # how long the Job may stay active after starting
  restartPolicy <string>            # restart policy (set in the Pod template; a Job only accepts OnFailure or Never)
                                    # 1. Always: kubelet automatically restarts the container when it fails
                                    # 2. OnFailure: restart when the container terminates with a non-zero exit code
                                    # 3. Never: kubelet never restarts the container, whatever its state
```
Example 2: Create a Job that completes its task twice
```shell
[root@k8s-master PodControl]# cat job-demo.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: alpine:3.11
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "sleep 60"]
      restartPolicy: Never
  completions: 2                  # complete twice; no parallelism configured, so Pods run one at a time: the second starts only after the first finishes
  ttlSecondsAfterFinished: 3600   # keep the finished Job for 1 hour
  backoffLimit: 3                 # retry limit, lowered from the default 6 to 3
  activeDeadlineSeconds: 300      # how long the Job may stay active after starting
[root@k8s-master PodControl]# kubectl apply -f job-demo.yaml
[root@k8s-master PodControl]# kubectl get pod
NAME                   READY   STATUS              RESTARTS   AGE
daemonset-demo-4zfwp   1/1     Running             0          20m
daemonset-demo-j7m7k   1/1     Running             0          20m
daemonset-demo-xj6wc   1/1     Running             0          20m
job-demo-w4nkh         0/1     ContainerCreating   0          2s
[root@k8s-master PodControl]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
daemonset-demo-4zfwp   1/1     Running     0          22m
daemonset-demo-j7m7k   1/1     Running     0          22m
daemonset-demo-xj6wc   1/1     Running     0          22m
job-demo-vfh9r         1/1     Running     0          49s    # running serially
job-demo-w4nkh         0/1     Completed   0          2m5s   # completed
[root@k8s-master PodControl]# cat job-para-demo.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-para-demo
spec:
  template:
    spec:
      containers:
      - name: myjob
        image: alpine:3.11
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "sleep 60"]
      restartPolicy: Never
  completions: 12                 # complete 12 times
  parallelism: 2                  # run 2 Pods at the same time
  ttlSecondsAfterFinished: 3600
  backoffLimit: 3
  activeDeadlineSeconds: 1200
[root@k8s-master PodControl]# kubectl apply -f job-para-demo.yaml
job.batch/job-para-demo created
[root@k8s-master PodControl]# kubectl get job
NAME            COMPLETIONS   DURATION   AGE
job-demo        2/2           2m25s      11m
job-para-demo   10/12         6m37s      7s
[root@k8s-master PodControl]# kubectl get pod
NAME                                             READY   STATUS              RESTARTS   AGE
daemonset-demo-4zfwp                             1/1     Running             0          25m
daemonset-demo-j7m7k                             1/1     Running             0          25m
daemonset-demo-xj6wc                             1/1     Running             0          25m
deployment-demo-fb544c5d8-lj5gt                  0/1     Terminating         0          17h
deployment-demo-with-strategy-59468cb976-vkxdg   0/1     Terminating         0          16h
job-demo-vfh9r                                   0/1     Completed           0          3m41s
job-demo-w4nkh                                   0/1     Completed           0          4m57s
job-para-demo-9jtnv                              0/1     ContainerCreating   0          7s     # 2 Pods in parallel at any time, 6 waves in total
job-para-demo-q2h6g                              0/1     ContainerCreating   0          7s
[root@k8s-master PodControl]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
daemonset-demo-4zfwp   1/1     Running     0          30m
daemonset-demo-j7m7k   1/1     Running     0          30m
daemonset-demo-xj6wc   1/1     Running     0          30m
job-demo-vfh9r         0/1     Completed   0          9m38s
job-demo-w4nkh         0/1     Completed   0          10m
job-para-demo-8fz78    0/1     Completed   0          3m48s
job-para-demo-9jtnv    0/1     Completed   0          6m4s
job-para-demo-bnw47    0/1     Completed   0          2m42s
job-para-demo-dsmbm    0/1     Completed   0          96s
job-para-demo-j4zw5    1/1     Running     0          30s
job-para-demo-jkbw4    0/1     Completed   0          4m55s
job-para-demo-l9pwc    0/1     Completed   0          96s
job-para-demo-lxxrv    1/1     Running     0          30s
job-para-demo-nljhg    0/1     Completed   0          4m55s
job-para-demo-q2h6g    0/1     Completed   0          6m4s
job-para-demo-rc9qt    0/1     Completed   0          3m48s
job-para-demo-xnzsq    0/1     Completed   0          2m42s
```
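To block until a Job finishes instead of polling `kubectl get pod`, `kubectl wait` can be used; a hedged convenience not shown in the original session:

```shell
# Wait for job-demo to reach the Complete condition (up to 5 minutes).
kubectl wait --for=condition=complete job/job-demo --timeout=300s
# With completions=12 and parallelism=2, job-para-demo runs 6 waves of 2 Pods,
# so the expected total duration is roughly 6 x 60s plus scheduling overhead.
```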
Introduction to CronJob and its field format
A CronJob manages time-based Jobs, namely:
- Run once at a given point in time
- Run periodically at given points in time
**Prerequisite: the Kubernetes cluster in use must be at version >= 1.8 (for CronJob). On clusters running earlier versions (< 1.8), the batch/v2alpha1 API can be enabled by passing the option --runtime-config=batch/v2alpha1=true when starting the API Server.**
Typical usage looks like this:
1. Schedule a Job to run at a given point in time
2. Create periodically running Jobs, for example: database backups, sending email
A CronJob carries out its work through Jobs; their relationship is similar to that between a Deployment and a ReplicaSet.
CronJob spec
```yaml
apiVersion: batch/v1beta1              # API group and version
kind: CronJob                          # resource type identifier
metadata:
  name <string>                        # resource name, unique within its scope
  namespace <string>                   # namespace; CronJob is a namespaced resource
spec:
  jobTemplate <object>                 # Job template, required field
    metadata <object>                  # template metadata
    spec <object>                      # desired state of the Job
  schedule <string>                    # schedule, required field
  concurrencyPolicy <string>           # concurrency policy; valid values are Allow, Forbid and Replace; governs
                                       # whether two runs may overlap when the previous run has not finished
                                       # by the time the next scheduled run is due
  failedJobsHistoryLimit <integer>     # number of failed Jobs to keep in history, default 1
  successfulJobsHistoryLimit <integer> # number of successful Jobs to keep in history, default 3
  startingDeadlineSeconds <integer>    # deadline for starting a Job that missed its scheduled time
  suspend <boolean>                    # whether to suspend subsequent runs; does not affect a run already started; default false
```
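The schedule field uses standard five-field cron syntax. A short sketch; the extra example schedules are illustrative assumptions, not from the original:

```yaml
# fields: minute hour day-of-month month day-of-week
schedule: "*/2 * * * *"   # every 2 minutes (used in Example 3 below)
# "0 3 * * *"             # every day at 03:00, e.g. a nightly database backup
# "0 9 * * 1"             # every Monday at 09:00
```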
Example 3: Create a CronJob that runs a task every 2 minutes
```shell
[root@k8s-master PodControl]# cat cronjob-demo.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-demo
  namespace: default
spec:
  schedule: "*/2 * * * *"           # run every 2 minutes; same fields as a Linux crontab: minute hour day-of-month month day-of-week
  jobTemplate:
    metadata:
      labels:
        controller: cronjob-demo
    spec:                           # the Job definition
      parallelism: 1                # parallelism of 1
      completions: 1                # run once
      ttlSecondsAfterFinished: 600  # delete 600s after finishing
      backoffLimit: 3               # at most 3 retries
      activeDeadlineSeconds: 60
      template:
        spec:
          containers:
          - name: myjob
            image: alpine
            command:
            - /bin/sh
            - -c
            - date; echo Hello from cronJob, sleep a while...; sleep 10
          restartPolicy: OnFailure  # restart when the container terminates with a non-zero exit code
  startingDeadlineSeconds: 300
[root@k8s-master PodControl]# kubectl get cronjob
NAME           SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob-demo   */2 * * * *   False     0        <none>          58s
[root@k8s-master PodControl]# kubectl get pod
NAME                            READY   STATUS              RESTARTS   AGE
cronjob-demo-1629169560-vn8hk   0/1     ContainerCreating   0          12s
daemonset-demo-4zfwp            1/1     Running             0          51m
daemonset-demo-j7m7k            1/1     Running             0          51m
daemonset-demo-xj6wc            1/1     Running             0          51m
```
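If a run can outlive its 2-minute interval, concurrencyPolicy decides what happens when the next scheduled time arrives. A minimal sketch with assumed values, extending the manifest above:

```yaml
# Fragment of a CronJob spec: Forbid skips the new run while the old one is
# still active; Replace would cancel the old Job and start a fresh one.
spec:
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 300   # a missed run may still start up to 300s late
```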
StatefulSet: a stateful application controller
As a controller, a StatefulSet provides Pods with unique identities, and it can guarantee the ordering of deployment and scaling.
A StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
- Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs
- Stable network identity: after a Pod is rescheduled its PodName and HostName remain unchanged, implemented with a Headless Service (a Service without a Cluster IP)
- Ordered deployment and ordered scale-out: Pods are ordered, so deployment and scale-out proceed in the defined order (from 0 to N-1; all preceding Pods must be Running and Ready before the next Pod runs), with init containers used to enforce the ordering
- Ordered scale-in and ordered deletion (from N-1 to 0)
- StatefulSet: a general-purpose controller for stateful applications

Each Pod has its own unique identity; when a Pod fails, it can only be replaced by a new instance carrying the same identity. Pod names follow the pattern ${STATEFULSET_NAME}-${ORDINAL}: web-0, web-1, web-2. Both updates and deletions run in reverse order, starting from the last Pod. A StatefulSet depends strongly on a Headless Service, which creates the unique identity of each Pod and guarantees that the Pod's DNS name resolves to that unique Pod. If necessary, a Pod can be given a dedicated storage volume, which must take the form of a PVC.
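The stable per-Pod DNS names created through the Headless Service follow a fixed pattern; the demodb names below refer to the StatefulSet built later in this section:

```shell
# Pattern: <pod-name>.<service-name>.<namespace>.svc.cluster.local
nslookup demodb-1.demodb.default.svc.cluster.local   # always resolves to Pod demodb-1
```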
StatefulSet spec:
```yaml
apiVersion: apps/v1               # API group and version
kind: StatefulSet                 # resource type identifier
metadata:
  name <string>                   # resource name, unique within its scope
  namespace <string>              # namespace; StatefulSet is a namespaced resource
spec:
  replicas <integer>              # desired number of Pod replicas, default 1
  selector <object>               # label selector; must match the labels in the Pod template, required field
  template <object>               # Pod template object, required field
  revisionHistoryLimit <integer>  # number of update history records to keep, default 10
  updateStrategy <object>         # update strategy
    type <string>                 # update type; valid values are OnDelete and RollingUpdate
    rollingUpdate <object>        # rolling update parameters, only for the RollingUpdate type
      partition <integer>         # partition ordinal, default 0; e.g. 2 means only Pods with an ordinal >= 2 are updated
  serviceName <string>            # name of the associated Headless Service, required field
  volumeClaimTemplates <[]Object> # volume claim templates
    apiVersion <string>           # API group and version of the PVC resource, may be omitted
    kind <string>                 # PVC resource type identifier, may be omitted
    metadata <object>             # claim template metadata
    spec <object>                 # desired state; same fields as a PVC
  podManagementPolicy <string>    # Pod management policy; the default "OrderedReady" creates Pods in order
                                  # and deletes them in reverse order, while "Parallel" operates in parallel
```
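The partition field enables canary-style updates. A hedged sketch using the demodb StatefulSet from the examples below: with partition=2, a template change only replaces Pods with an ordinal >= 2; lowering the partition back to 0 then rolls the change out to the remaining Pods.

```shell
kubectl patch sts/demodb -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
```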
Example 4: Create a StatefulSet
```shell
[root@k8s-master PodControl]# cat statefulset-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-sts
  namespace: default
spec:
  clusterIP: None             # create a headless Service
  ports:
  - port: 80
    name: http
  selector:
    app: demoapp
    controller: sts-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-demo
spec:
  serviceName: demoapp-sts    # bind the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
      controller: sts-demo
  template:                   # the Pod template
    metadata:
      labels:
        app: demoapp
        controller: sts-demo
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0   # not a stateful service; only used to demonstrate how a StatefulSet creates Pods
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: appdata
          mountPath: /app/data
  volumeClaimTemplates:
  - metadata:
      name: appdata
    spec:
      accessModes: ["ReadWriteOnce"]   # single-node read-write
      storageClassName: "longhorn"     # use the StorageClass created earlier
      resources:
        requests:
          storage: 2Gi
[root@k8s-master PodControl]# kubectl apply -f statefulset-demo.yaml
service/demoapp-sts created
# Pods are created in order: the second Pod is created only after the first is up
[root@k8s-master PodControl]# kubectl get pod
NAME         READY   STATUS              RESTARTS   AGE
sts-demo-0   0/1     ContainerCreating   0          12s
[root@k8s-master PodControl]# kubectl get pod
NAME         READY   STATUS              RESTARTS   AGE
sts-demo-0   1/1     Running             0          29s
sts-demo-1   0/1     ContainerCreating   0          9s
```
Example 5: StatefulSet scale-out and scale-in
```shell
[root@k8s-master PodControl]# cat demodb.yaml
# demodb, an educational Kubernetes-native NoSQL data store. It is a distributed
# key-value store, supporting permanent read and write operations.
# Environment Variables: DEMODB_DATADIR, DEMODB_HOST, DEMODB_PORT
# default port: 9907/tcp for clients, 9999/tcp for members.
# Maintainer: MageEdu <mage@magedu.com>
---
apiVersion: v1
kind: Service
metadata:
  name: demodb
  namespace: default
  labels:
    app: demodb
spec:
  clusterIP: None
  ports:
  - port: 9907
  selector:
    app: demodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demodb
  namespace: default
spec:
  selector:
    matchLabels:
      app: demodb
  serviceName: "demodb"
  replicas: 3
  template:
    metadata:
      labels:
        app: demodb
    spec:
      nodeName: k8s-node3   # pin to this node; in the test environment its resources are relatively ample
      containers:
      - name: demodb-shard
        image: ikubernetes/demodb:v0.1
        ports:
        - containerPort: 9907
          name: db
        env:
        - name: DEMODB_DATADIR
          value: "/demodb/data"
        volumeMounts:
        - name: data
          mountPath: /demodb/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "longhorn"
      resources:
        requests:
          storage: 1Gi
[root@k8s-master PodControl]# kubectl apply -f demodb.yaml
[root@k8s-master PodControl]# kubectl get sts
NAME     READY   AGE
demodb   1/3     31s
[root@k8s-master PodControl]# kubectl get pod    # created in order
NAME                                 READY   STATUS              RESTARTS   AGE
centos-deployment-66d8cd5f8b-9x47c   1/1     Running             1          22h
demodb-0                             1/1     Running             0          33s
demodb-1                             0/1     ContainerCreating   0          5s
[root@k8s-master PodControl]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
demodb-0   1/1     Running   0          5m22s   10.244.3.69   k8s-node3   <none>           <none>
demodb-1   1/1     Running   0          5m1s    10.244.3.70   k8s-node3   <none>           <none>
demodb-2   1/1     Running   0          4m37s   10.244.3.71   k8s-node3   <none>           <none>
[root@k8s-master PodControl]# kubectl get pvc    # PVCs bound successfully
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-demodb-0   Bound    pvc-bef2c61d-4ab4-4652-9902-f4bc2a918beb   1Gi        RWO            longhorn       14m
data-demodb-1   Bound    pvc-2ebb5510-b9c1-4b7e-81d0-f540a1451eb3   1Gi        RWO            longhorn       9m8s
data-demodb-2   Bound    pvc-74ee431f-794c-4b3d-837a-9337a6767f49   1Gi        RWO            longhorn       8m
```
- Open a new terminal for testing
```shell
[root@k8s-master PodControl]# kubectl run pod-$RANDOM --image=ikubernetes/admin-box:latest -it --rm --command -- /bin/sh
root@pod-31692 # nslookup -query=A demodb          # resolution test
Server:     10.96.0.10
Address:    10.96.0.10#53
Name:   demodb.default.svc.cluster.local
Address: 10.244.3.69
Name:   demodb.default.svc.cluster.local
Address: 10.244.3.71
Name:   demodb.default.svc.cluster.local
Address: 10.244.3.70
root@pod-31692 # nslookup -query=PTR 10.244.3.70   # reverse lookup of the IP
Server:     10.96.0.10
Address:    10.96.0.10#53
70.3.244.10.in-addr.arpa    name = demodb-1.demodb.default.svc.cluster.local.   # resolves to a unique Pod
root@pod-31692 # nslookup -query=PTR 10.244.3.71
Server:     10.96.0.10
Address:    10.96.0.10#53
71.3.244.10.in-addr.arpa    name = demodb-2.demodb.default.svc.cluster.local.   # resolves to a unique Pod
root@pod-31692 # echo "www.google.com" > /tmp/data
root@pod-31692 # curl -L -XPUT -T /tmp/data http://demodb:9907/set/data   # upload data through the Service: set uploads, get downloads
WRITE completed
root@pod-31692 # curl http://demodb:9907/get/data   # although the Pods sit on different PVs, they sync data among themselves to stay consistent
www.google.com
```
- Update: to update the image, you can use the kubectl set image command, edit the YAML file, or apply a patch with kubectl patch
```shell
# edit the YAML file, changing the image version to v0.2
[root@k8s-master PodControl]# kubectl apply -f demodb.yaml
service/demodb unchanged
statefulset.apps/demodb configured
[root@k8s-master PodControl]# kubectl get sts -o wide
NAME     READY   AGE   CONTAINERS     IMAGES
demodb   2/3     18m   demodb-shard   ikubernetes/demodb:v0.2
# as noted earlier, the update proceeds in reverse order: demodb-2 is updated first
[root@k8s-master PodControl]# kubectl get pod -o wide
NAME                                 READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
centos-deployment-66d8cd5f8b-9x47c   1/1     Running             1          22h   10.244.1.160   k8s-node1   <none>           <none>
demodb-0                             1/1     Running             0          18m   10.244.3.69    k8s-node3   <none>           <none>
demodb-1                             1/1     Running             0          18m   10.244.3.70    k8s-node3   <none>           <none>
demodb-2                             0/1     ContainerCreating   0          16s   <none>         k8s-node3   <none>           <none>
```
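The two other methods mentioned above reach the same end state; a sketch (the apply-based update above is what the original session actually ran):

```shell
# Method 1: kubectl set image on the StatefulSet's container
kubectl set image sts/demodb demodb-shard=ikubernetes/demodb:v0.2
# Method 2: a JSON patch against the Pod template's image field
kubectl patch sts/demodb --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"ikubernetes/demodb:v0.2"}]'
```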
- Scaling: scale-out creates Pods in order, while scale-in removes them in reverse order
```shell
# use kubectl patch to change the replica count
[root@k8s-master PodControl]# kubectl patch sts/demodb -p '{"spec":{"replicas":5}}'   # scale out to 5
statefulset.apps/demodb patched
[root@k8s-master PodControl]# kubectl get pod
NAME       READY   STATUS              RESTARTS   AGE
demodb-0   1/1     Running             0          6m43s
demodb-1   1/1     Running             0          7m3s
demodb-2   1/1     Running             0          7m53s
demodb-3   0/1     ContainerCreating   0          10s
[root@k8s-master PodControl]# kubectl get pod
NAME       READY   STATUS              RESTARTS   AGE
demodb-0   1/1     Running             0          6m52s
demodb-1   1/1     Running             0          7m12s
demodb-2   1/1     Running             0          8m2s
demodb-3   0/1     ContainerCreating   0          19s
[root@k8s-master PodControl]# kubectl patch sts/demodb -p '{"spec":{"replicas":2}}'   # scale in to 2; Pods are removed in reverse order
[root@k8s-master PodControl]# kubectl get pod
NAME       READY   STATUS        RESTARTS   AGE
demodb-0   1/1     Running       0          9m19s
demodb-1   1/1     Running       0          6m9s
demodb-2   1/1     Running       0          5m25s
demodb-3   1/1     Running       0          2m18s
demodb-4   0/1     Terminating   0          83s
[root@k8s-master PodControl]# kubectl get pod
NAME       READY   STATUS        RESTARTS   AGE
demodb-0   1/1     Running       0          10m
demodb-1   1/1     Running       0          7m34s
demodb-2   1/1     Running       0          6m50s
demodb-3   1/1     Terminating   0          3m43s
[root@k8s-master PodControl]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
demodb-0    1/1     Running   0          12m
demodb-1    1/1     Running   0          9m46s
pod-31692   1/1     Running   0          85m
# data consistency test: although the Pods sit on different PVs, they sync data among themselves to stay consistent
root@pod-31692 # curl http://demodb:9907/get/data
www.google.com
```
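For completeness, kubectl scale is an equivalent alternative to patching replicas (not used in the original session):

```shell
kubectl scale sts/demodb --replicas=2
```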