Introduction to ConfigMap

ConfigMap is an API object in k8s used to decouple images from configuration files (in non-k8s environments we often use a configuration management center to decouple code from configuration, which is the same idea), making images portable and reusable. Pods can consume a ConfigMap as environment variables, command-line arguments, or configuration files in a volume. In production, using it for environment-variable configuration is very common.
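
As a minimal sketch of the environment-variable usage (the ConfigMap name, Pod name, and image below are made up for illustration and are not part of this article's setup):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config            # hypothetical name, for illustration only
data:
  DB_HOST: "10.0.0.10"
  DB_PORT: "3306"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env | grep DB_ && sleep 3600"]
    envFrom:
    - configMapRef:
        name: demo-config      # injects every key of the ConfigMap as an environment variable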

A similar API object is Secret.

The difference between the two is that the former stores non-sensitive, non-confidential data, such as IPs and ports, while the latter stores sensitive, confidential data, such as usernames, passwords, keys, and so on, kept base64-encoded.
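
For comparison, a minimal Secret sketch (the name and values are illustrative; the data fields hold base64-encoded strings, e.g. echo -n 'admin' | base64 yields YWRtaW4=):

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret            # hypothetical name, for illustration only
type: Opaque
data:
  username: YWRtaW4=           # echo -n 'admin'  | base64
  password: MTIzNDU2           # echo -n '123456' | base64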

For more about ConfigMap, see the official docs:
https://kubernetes.io/zh-cn/d...

Constraints on Using ConfigMap

  • The ConfigMap must be created before the Pod starts, since it is consumed by the Pod.
  • A Pod can reference a ConfigMap only when the two are in the same Namespace.
  • When a Pod mounts a ConfigMap (VolumeMount), inside the container it is mounted as a directory rather than a single file (unless subPath is used to project an individual key as one file).
  • When mounted onto an existing directory that already contains other files, the ConfigMap hides them; see the sketch after this list.
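
A sketch of the last point (the field values mirror the Deployment used later in this article): mounting with subPath projects a single ConfigMap key as one file, so the rest of the target directory, /etc/nginx here, is left intact:

        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf   # mount only this one file
          name: nginx                        # the ConfigMap-backed volume
          subPath: nginx.conf                # project a single key; other files in /etc/nginx stay visible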

Hands-On

For this exercise, the initial YAML configuration is shown below; it consists of three parts:

  • ConfigMap
  • Deployment
  • Service

You can also split these three parts into 3 separate YAML files and apply them individually.
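
For instance (the file names here are hypothetical), kubectl apply accepts multiple -f flags or a whole directory, so the split files can still be applied in one command:

# hypothetical file names for the three split manifests
kubectl apply -f configmap.yaml -f deployment.yaml -f service.yaml
# or put all three files in one directory and apply the directory
kubectl apply -f ./nginx-demo/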

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes  2;
    error_log  /var/log/nginx/error.log;
    events {
      worker_connections  1024;
    }
    http {
      include       mime.types;
      #sendfile        on;
      keepalive_timeout  1800;
      log_format  main
              'remote_addr:$remote_addr '
              'time_local:$time_local   '
              'method:$request_method   '
              'uri:$request_uri '
              'host:$host       '
              'status:$status   '
              'bytes_sent:$body_bytes_sent      '
              'referer:$http_referer    '
              'useragent:$http_user_agent       '
              'forwardedfor:$http_x_forwarded_for       '
              'request_time:$request_time';
      access_log        /var/log/nginx/access.log main;
      server {
          listen       80;
          server_name  localhost;
          location / {
              root   html;
              index  index.html index.htm;
          }
          error_page   500 502 503 504  /50x.html;
      }
      include /etc/nginx/conf.d/*.conf;
    }
  virtualhost.conf: |
    upstream app {
      server localhost:8080;
      keepalive 1024;
    }
    server {
      listen 80 default_server;
      root /usr/local/app;
      access_log /var/log/nginx/app.access_log main;
      error_log  /var/log/nginx/app.error_log;
      location / {
        proxy_pass http://app/;
        proxy_http_version 1.1;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-demo-nginx
  template:
    metadata:
      labels:
        app: my-demo-nginx
    spec:
      containers:
      - name: my-demo-nginx
        image: 192.168.100.20:8888/my-demo/nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf # mount the nginx-conf volume to /etc/nginx
          #readOnly: true
          #name: nginx-conf
          #name: my-demo-nginx
          name: nginx
          subPath: nginx.conf
        - mountPath: /var/log/nginx
          name: log
      volumes:
      - name: nginx
        configMap:
          name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx
          items:
            - key: nginx.conf
              path: nginx.conf
            - key: virtualhost.conf
              path: conf.d/virtualhost.conf # dig directory
      - name: log
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service # the Service is named nginx-service
  labels:
    app: nginx-service # attach an app label to the Service
spec:
  type: NodePort # expose via NodePort: allocate a port on every Node as the external access entry point
  #type: LoadBalancer # works with a specific cloud provider, e.g. Google Cloud, AWS, OpenStack
  #type: ClusterIP # default: allocate a virtual IP (VIP) reachable only from inside the cluster
  ports:
  - port: 8000 # port: the port for accessing the Service from inside the cluster, i.e. clusterIP:port reaches the Service
    targetPort: 80 # targetPort: the Pod's port; traffic from port and nodePort is forwarded by kube-proxy to the backend Pod's targetPort and finally into the container
    nodePort: 32500 # nodePort: the port for accessing the Service from outside the cluster, i.e. nodeIP:nodePort reaches the Service
  selector:
    app: my-demo-nginx # must match the Pod label defined in the Deployment above

When executing this YAML file, I ran into a problem:
With the Deployment configuration in this template and exercise, spec.template.spec.containers.volumeMounts.name only succeeds with the value nginx. If it is my-demo-nginx, the error is:

[root@k8s-master k8s-install]# kubectl create -f configmap-nginx.yaml
configmap/nginx-conf created
service/nginx-service created
The Deployment "my-demo-nginx" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "my-demo-nginx"

With nginx-conf, the error is:

[root@k8s-master k8s-install]# kubectl apply -f configmap-nginx.yaml
Warning: resource configmaps/nginx-conf is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/nginx-conf configured
Warning: resource services/nginx-service is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/nginx-service configured
The Deployment "my-demo-nginx" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nginx-conf"

Only after changing it to nginx does it work:

[root@k8s-master k8s-install]# kubectl apply -f configmap-nginx.yaml
configmap/nginx-conf unchanged
deployment.apps/my-demo-nginx created
service/nginx-service unchanged

The cause of the error is that spec.template.spec.volumes.name is set to nginx, and the two fields must match (this is one of the painful aspects of editing and using YAML: the more fields, the more pitfalls. The official docs don't provide detailed documentation that directly tells you what every field means, which fields must stay consistent with which, what to watch out for, and so on. A lot of it you have to discover through practice, so YAML best practices aren't easy to summarize either).
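
To make the relationship concrete, here is the relevant excerpt from the Deployment above: volumeMounts[].name must reference an entry in volumes[].name; it is not the ConfigMap name and not the container name:

      containers:
      - name: my-demo-nginx          # container name: not what volumeMounts.name refers to
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: nginx                # must equal volumes[].name below
          subPath: nginx.conf
      volumes:
      - name: nginx                  # the volume name that volumeMounts.name points to
        configMap:
          name: nginx-conf           # the ConfigMap name is referenced only here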

Check whether the ConfigMap, Deployment, Service, and Pod were created successfully:

[root@k8s-master k8s-install]# kubectl get cm nginx-conf
NAME         DATA   AGE
nginx-conf   2      12m
[root@k8s-master k8s-install]# kubectl get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-demo-nginx   0/2     2            0           7m29s
[root@k8s-master k8s-install]# kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
nginx-service   NodePort   10.103.221.135   <none>        8000:32500/TCP   10m
[root@k8s-master k8s-install]# kubectl get po -A | grep nginx
default       my-demo-nginx-7bff4cd4dd-hs4bx               0/1     CrashLoopBackOff   6 (4m18s ago)    10m
default       my-demo-nginx-7bff4cd4dd-tdjqd               0/1     CrashLoopBackOff   6 (4m11s ago)    10m

At this point I hit the second problem:
Checking the Pod status showed it stuck in CrashLoopBackOff, which means the Pod was not running properly.
The Pod logs and the container logs were both empty.
Using describe to look at the Pod events yielded only one uninformative message: Back-off restarting failed container. Deleting everything just deployed and recreating it over and over didn't help either.

[root@k8s-master k8s-install]# kubectl describe po my-demo-nginx-9c5b6cc8c-5bws7
Name:         my-demo-nginx-9c5b6cc8c-5bws7
Namespace:    default
Priority:     0
Node:         k8s-slave2/192.168.100.22
Start Time:   Tue, 14 Jun 2022 00:13:44 +0800
Labels:       app=my-demo-nginx
              pod-template-hash=9c5b6cc8c
Annotations:  <none>
Status:       Running
IP:           10.244.1.25
IPs:
  IP:           10.244.1.25
Controlled By:  ReplicaSet/my-demo-nginx-9c5b6cc8c
Containers:
  my-demo-nginx:
    Container ID:   docker://cd6c27ee399cce85adf64465dce43e7b361c92f5f85b46944a2749068a111a5e
    Image:          192.168.100.20:8888/my-demo/nginx
    Image ID:       docker-pullable://192.168.100.20:5000/mynginx@sha256:a12ca72a7db4dfdd981003da98491c8019d2de1da53c1bff4f1b378da5b1ca32
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 14 Jun 2022 00:14:22 +0800
      Finished:     Tue, 14 Jun 2022 00:14:22 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 14 Jun 2022 00:13:59 +0800
      Finished:     Tue, 14 Jun 2022 00:13:59 +0800
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /etc/nginx/nginx.conf from nginx (rw,path="nginx.conf")
      /var/log/nginx from log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxbxk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nginx:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nginx-conf
    Optional:  false
  log:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-nxbxk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  39s               default-scheduler  Successfully assigned default/my-demo-nginx-9c5b6cc8c-5bws7 to k8s-slave2
  Normal   Pulled     38s               kubelet            Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 135.495946ms
  Normal   Pulled     37s               kubelet            Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 68.384569ms
  Normal   Pulled     24s               kubelet            Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 81.06582ms
  Normal   Created    1s (x4 over 38s)  kubelet            Created container my-demo-nginx
  Normal   Started    1s (x4 over 38s)  kubelet            Started container my-demo-nginx
  Normal   Pulling    1s (x4 over 38s)  kubelet            Pulling image "192.168.100.20:8888/my-demo/nginx"
  Normal   Pulled     1s                kubelet            Successfully pulled image "192.168.100.20:8888/my-demo/nginx" in 178.317289ms
  Warning  BackOff    0s (x5 over 36s)  kubelet            Back-off restarting failed container
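
For reference, the checks described above correspond to commands along these lines (the pod name is taken from this run; --previous prints the logs of the last crashed container instance):

kubectl get po my-demo-nginx-9c5b6cc8c-5bws7              # status: CrashLoopBackOff
kubectl logs my-demo-nginx-9c5b6cc8c-5bws7                # current container logs (empty here)
kubectl logs my-demo-nginx-9c5b6cc8c-5bws7 --previous     # logs of the previous, crashed container
kubectl describe po my-demo-nginx-9c5b6cc8c-5bws7         # events, as shown above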

Solution:
Add the following 3 lines of configuration below the spec.template.spec.containers.image line, namely the command, args, and imagePullPolicy parameters:

spec:
  containers:
  - name: my-demo-nginx
    image: 192.168.100.20:8888/my-demo/nginx
    command: ["/bin/bash", "-c", "--"]
    args: ["while true; do sleep 30; done;"]
    imagePullPolicy: IfNotPresent

After redeploying, the Pod finally runs normally:

[root@k8s-master k8s-install]# kubectl create -f configmap-nginx.yaml
configmap/nginx-conf created
deployment.apps/my-demo-nginx created
service/nginx-service created
[root@k8s-master k8s-install]#
[root@k8s-master k8s-install]# kubectl get po
NAME                             READY   STATUS    RESTARTS   AGE
my-demo-nginx-6849476467-7bml4   1/1     Running   0          10s

Cause:
The reason the Pod kept failing to run is that the referenced nginx image itself was faulty: the nginx process inside the container never started properly, so neither the container nor the Pod could run normally. After adding the parameters above, command and args override the image's ENTRYPOINT and CMD, so /bin/bash replaces nginx as the root process and the container runs fine. This follows from how containers work: a container lives only as long as its root process does. If that's unclear, review the basics of containers.
Knowing the cause, it's clear that of the three parameters appended in this exercise, only one is essential, namely command; the other two are optional. So you could add just that single parameter, for example: command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
If the referenced nginx image had been fine, the second problem would never have appeared. So let me also share the troubleshooting idea from this exercise: when a Pod sits in CrashLoopBackOff and both the Pod and container logs are empty, consider that the referenced container image may be the problem. In this example, after the image started the container, the nginx process that should have been running never started, and adding a command parameter made it easy to verify that.
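
As a complementary check (a sketch that assumes Docker is available on a node and reuses the image address from this article), you can inspect what the image runs by default and try starting nginx by hand to see why it exits:

# show what the image runs by default (Entrypoint / Cmd)
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' 192.168.100.20:8888/my-demo/nginx

# start a throwaway container with a shell and try to launch nginx manually
docker run --rm -it 192.168.100.20:8888/my-demo/nginx /bin/bash
# inside the container: nginx -t    # validate the config and watch for errors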

As shown below, once the Pod and container start normally (thanks to the added command parameter), logging into the container shows that the root process is /bin/bash rather than nginx, which confirms the earlier judgment:

[root@k8s-master ~]# kubectl exec my-demo-nginx-6849476467-7bml4 -c my-demo-nginx -it -- /bin/bash
[root@my-demo-nginx-6849476467-7bml4 /]#
[root@my-demo-nginx-6849476467-7bml4 /]# ps aux
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          1  0.0  0.0  11904  1512 ?        Ss   Jun14   0:00 /bin/bash -c -- while true; do sleep 30; done;
root       1463  0.0  0.1  12036  2096 pts/0    Ss   03:34   0:00 /bin/bash
root       1479  0.0  0.0  23032   932 ?        S    03:34   0:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 30
root       1480  0.0  0.0  44652  1772 pts/0    R+   03:34   0:00 ps aux

The final, complete configuration is as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes  2;
    error_log  /var/log/nginx/error.log;
    events {
      worker_connections  1024;
    }
    http {
      include       mime.types;
      #sendfile        on;
      keepalive_timeout  1800;
      log_format  main
              'remote_addr:$remote_addr '
              'time_local:$time_local   '
              'method:$request_method   '
              'uri:$request_uri '
              'host:$host       '
              'status:$status   '
              'bytes_sent:$body_bytes_sent      '
              'referer:$http_referer    '
              'useragent:$http_user_agent       '
              'forwardedfor:$http_x_forwarded_for       '
              'request_time:$request_time';
      access_log        /var/log/nginx/access.log main;
      server {
          listen       80;
          server_name  localhost;
          location / {
              root   html;
              index  index.html index.htm;
          }
          error_page   500 502 503 504  /50x.html;
      }
      include /etc/nginx/conf.d/*.conf;
    }
  virtualhost.conf: |
    upstream app {
      server localhost:8080;
      keepalive 1024;
    }
    server {
      listen 80 default_server;
      root /usr/local/app;
      access_log /var/log/nginx/app.access_log main;
      error_log  /var/log/nginx/app.error_log;
      location / {
        proxy_pass http://app/;
        proxy_http_version 1.1;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-demo-nginx
  template:
    metadata:
      labels:
        app: my-demo-nginx
    spec:
      containers:
      - name: my-demo-nginx
        image: 192.168.100.20:8888/my-demo/nginx
        command: ["/bin/bash", "-c", "--"]
        args: ["while true; do sleep 30; done;"]
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf # mount the nginx-conf volume to /etc/nginx
          #readOnly: true
          #name: nginx-conf
          #name: my-demo-nginx
          name: nginx
          subPath: nginx.conf
        - mountPath: /var/log/nginx
          name: log
      volumes:
      - name: nginx
        configMap:
          name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx
          items:
            - key: nginx.conf
              path: nginx.conf
            - key: virtualhost.conf
              path: conf.d/virtualhost.conf # dig directory
      - name: log
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service # the Service is named nginx-service
  labels:
    app: nginx-service # attach an app label to the Service
spec:
  type: NodePort # expose via NodePort: allocate a port on every Node as the external access entry point
  #type: LoadBalancer # works with a specific cloud provider, e.g. Google Cloud, AWS, OpenStack
  #type: ClusterIP # default: allocate a virtual IP (VIP) reachable only from inside the cluster
  ports:
  - port: 8000 # port: the port for accessing the Service from inside the cluster, i.e. clusterIP:port reaches the Service
    targetPort: 80 # targetPort: the Pod's port; traffic from port and nodePort is forwarded by kube-proxy to the backend Pod's targetPort and finally into the container
    nodePort: 32500 # nodePort: the port for accessing the Service from outside the cluster, i.e. nodeIP:nodePort reaches the Service
  selector:
    app: my-demo-nginx # must match the Pod label defined in the Deployment above