The <Kubelet from Beginner to Giving Up> series walks through the Kubelet component from the basics down to the source code. In this installment, <Kubernetes Takes Off with GPUs>, zouyee first covers how NVIDIA GPUs are brought into Kubernetes; follow-up articles will cover the Device Plugin concepts and the source code of the Kubelet Device Manager component.
1. Background
1.1 Overview
Before the device plugin framework, the recommended way to consume devices such as GPUs was the built-in Accelerators feature gate. Following Kubernetes' plugin-oriented design philosophy of separation of concerns, the device plugin framework was introduced in Kubernetes 1.8 and promoted to beta in 1.10, letting users bring system hardware resources into the Kubernetes ecosystem. This article covers how to install and deploy NVIDIA GPUs, and introduces Device Plugins, their working mechanism, and source code analysis, including the plugin framework, using and scheduling GPUs, and error handling and optimization.
1.2 Related Technologies
Device Plugins were promoted to beta in Kubernetes 1.10. The framework was introduced in Kubernetes 1.8 so that third-party vendors can plug device resources into Kubernetes and expose them to containers as Extended Resources. With device plugins, users do not need to modify Kubernetes itself: a device vendor develops a plugin that implements the Device Plugin gRPC interfaces (think about it: doesn't volume management in Kubernetes follow a similar logic? CSI, CNI, CRI?). Typical device plugin implementations today include:

a) AMD GPU plugin
b) Intel device plugins: GPU, FPGA, and QuickAssist devices
c) KubeVirt device plugins for hardware-assisted virtualization
d) NVIDIA's GPU plugin
e) RDMA NIC plugin for high performance and low latency
f) Solarflare low-latency 10GbE NIC driver
g) SR-IOV network device plugin
h) Xilinx FPGA device plugin

On startup, a device plugin exposes several gRPC services and communicates with the Kubelet through /var/lib/kubelet/device-plugins/kubelet.sock.
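To make the plugin contract more concrete, here is a toy Python model of the gRPC surface a device plugin serves. This is an illustration only: the real API is the proto-defined `v1beta1` device plugin interface served over gRPC in Go, and the `FakeGPUPlugin` class, its method names, and the returned dictionaries below are simplified stand-ins, not the actual generated types.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """One schedulable device as advertised to the kubelet."""
    id: str
    health: str  # "Healthy" or "Unhealthy"

class FakeGPUPlugin:
    """Toy in-memory stand-in for a plugin that would serve
    /var/lib/kubelet/device-plugins/nvidia-gpu.sock."""

    def __init__(self, devices):
        self.devices = devices

    def get_device_plugin_options(self):
        # Tells the kubelet whether PreStartContainer must be called.
        return {"pre_start_required": False}

    def list_and_watch(self):
        # In the real API this is a server-side stream that re-sends the
        # full device list whenever a device's health changes.
        return list(self.devices)

    def allocate(self, device_ids):
        # Returns what the runtime needs (env vars, mounts, device nodes)
        # to expose the granted devices to the container.
        return {"envs": {"NVIDIA_VISIBLE_DEVICES": ",".join(device_ids)}}

plugin = FakeGPUPlugin([Device("GPU-0", "Healthy")])
print([d.id for d in plugin.list_and_watch()])
print(plugin.allocate(["GPU-0"])["envs"]["NVIDIA_VISIBLE_DEVICES"])
```

The flow mirrors the real lifecycle: the plugin registers with the kubelet over kubelet.sock, the kubelet calls ListAndWatch to learn about devices, and calls Allocate when a pod requesting the extended resource is scheduled to the node.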
2. Deployment
NVIDIA GPUs can currently be deployed in three ways: Docker, containerd, and the GPU Operator. Since built-in Docker support is being removed (see <Notes on Kubernetes deprecating the built-in docker CRI support>), the rest of this article focuses on containerd; the Operator approach will be covered in a separate article. nvidia-container-toolkit currently supports both containerd and cri-o. Before walking through the containerd deployment, here are two problems encountered earlier:

1) Error while dialing dial unix:///run/containerd/containerd.sock

The Kubelet reported:

```
Events:
  Type     Reason         Age                From               Message
  ----     ------         ----               ----               -------
  Normal   Scheduled      10m                default-scheduler  Successfully assigned gpu-operator-resources/nvidia-device-plugin-daemonset-f99md to cl-gpu-md-0-f4gm6
  Warning  InspectFailed  10m (x3 over 10m)  kubelet            Failed to inspect image "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2": rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused"
```

and one of the NVIDIA Device Plugin DaemonSet pods logged:

```
# kubectl logs -f nvidia-device-plugin-daemonset-q9svq -nkube-system
2021/02/11 01:32:29 Loading NVML
2021/02/11 01:32:29 Failed to initialize NVML: could not load NVML library.
2021/02/11 01:32:29 If this is a GPU node, did you set the docker default runtime to `nvidia`?
2021/02/11 01:32:29 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2021/02/11 01:32:29 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start
2021/02/11 01:32:29 If this is not a GPU node, you should set up a toleration or nodeSelector to only deploy this plugin on GPU nodes
```

The cause was that default_runtime_name = "runc" in the containerd configuration file (config.toml) had not been changed to default_runtime_name = "nvidia". Related issue: https://github.com/NVIDIA/gpu-operator/issues/143

2) devices.allow: no such file or directory: unknown

Related issue: https://github.com/NVIDIA/libnvidia-container/issues/119. When the kubelet's cgroup driver is systemd, NVIDIA's container prestart hook resolves cgroup paths differently from containerd:

```
containerd[76114]: time="2020-12-04T08:52:13.029072066Z" level=error msg="StartContainer for \"7a1453c6e7ab8af7395ccc8dac5efcffa94a0834aa7b252e1dcd5b51f92bf13e\" failed" error="failed to create containerd task: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: open failed: /sys/fs/cgroup/devices/system.slice/containerd.service/kubepods-pod80540e95304d8cece2ae2afafd8b8976.slice/devices.allow: no such file or directory: unknown"
```

The fix is to upgrade libnvidia-container or the container-toolkit.

With that out of the way, on to the deployment itself.
2.1 Containerd
Versions

| Software | Version |
| --- | --- |
| Operating system | CentOS |
| Kernel | 4.19.25 |
| GPU model | Tesla T4 |
| Driver | 418.39 |
| CUDA | 10.1 |
| Kubernetes | 1.18.5 |
| NVIDIA Device Plugin | v0.7.3 |
| containerd | v1.4.3 |
| runc | 1.0.0-rc1 |
Installation
Note: the steps below assume an offline, intranet-only deployment. If you have Internet access, just follow the same steps and configuration.
a. Install the driver
```
$ tar -zxvf gpu.tar.gz
## Install dependencies
$ cd gpu/runtime
$ tar -zxvf dependency.tar.gz
$ cd dependency
## Check for a CUDA-capable NVIDIA GPU
$ cd ./lspci/
$ yum localinstall -y *.rpm
$ lspci | grep -i nvidia
## Install devel packages
$ cd ../devel
$ yum localinstall -y *.rpm
## Install gcc
$ cd ../gcc
$ yum localinstall -y *.rpm
## Unload the nouveau driver
$ lsmod | grep nouveau
$ rmmod nouveau
## Install the driver (screenshots below). To update the driver, download it from
## https://developer.nvidia.com/cuda-75-downloads-archive
$ cd ../../../driver
$ sh cuda_10.1.105_418.39_linux.run
## Verify the driver; the usual device table output means it installed correctly
$ nvidia-smi
```
Appendix: driver installation screenshots
(1) Type accept and press Enter
(2) Select install and press Enter
b. Configure containerd
```
## Update runc; download from https://github.com/opencontainers/runc/releases
$ cd ../runtime
$ cp runc /usr/bin/
## Update containerd; download from https://github.com/containerd/containerd/releases
$ tar -zxvf containerd-1.4.3-linux-amd64.tar.gz
$ cp bin/* /usr/bin/
## Install nvidia-container-runtime; yum repo:
## https://nvidia.github.io/nvidia-docker/centos7/nvidia-docker.repo
## (online install: yum install -y nvidia-container-runtime)
$ tar -zxvf nvidia-container-runtime.tar.gz
$ cd nvidia-container-runtime
$ yum localinstall -y *.rpm
```
Modify the containerd startup configuration
```
# Configure containerd
$ mkdir /etc/containerd/
$ vi /etc/containerd/config.toml
# Configure containerd.service
$ vi /usr/lib/systemd/system/containerd.service
$ systemctl daemon-reload
$ systemctl restart containerd
# Configure crictl
$ tar -zxvf crictl-v1.18.0-linux-amd64.tar.gz
$ mv crictl /usr/bin/
$ vi /etc/profile
alias crictl='crictl --runtime-endpoint unix:///run/containerd/containerd.sock'
$ source /etc/profile
# Verify that containerd and nvidia-container-runtime work together
$ cd test-image
$ ctr images import cuda-vector-add_v0.1.tar
$ ctr images push --plain-http registry.paas/cmss/cuda-vector-add:v0.1
# Run the verification container
$ ctr run -t --gpus 0 registry.paas/cmss/cuda-vector-add:v0.1 cp nvidia-smi
# Clean up the container
$ ctr c rm cp
```

1) config.toml

Run containerd config default > /etc/containerd/config.toml to generate the configuration, then make the following changes: 1) set default_runtime_name to "nvidia"; 2) add a new entry under runtimes; 3) if you have an internal image registry, you may replace docker.io with its name.

2) containerd.service

```
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
KillMode=process
Delegate=yes
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity

[Install]
WantedBy=multi-user.target
```
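For reference, after the changes described above, the runtime-related portion of config.toml would look roughly like the sketch below. This is based on NVIDIA's published containerd setup (section paths match containerd 1.4's CRI plugin); verify the exact section names against your generated config.toml before applying it.

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  # 1) switch the default runtime from "runc" to "nvidia"
  default_runtime_name = "nvidia"

  # 2) add an "nvidia" runtime backed by nvidia-container-runtime
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
      BinaryName = "/usr/bin/nvidia-container-runtime"
```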
c. Deploy the Device Plugin
After the Kubernetes cluster is deployed, change the kubelet runtime configuration:

```
$ vi /apps/conf/kubernetes/kubelet.env
--container-runtime=remote
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
$ cd device-plugin
$ docker load -i k8s-device-plugin_v0.7.3.tar
$ docker push
# manifest: https://github.com/NVIDIA/k8s-device-plugin/tree/master/deployments/static
$ kubectl apply -f nvidia-device-plugin.yml
$ kubectl logs -f nvidia-device-plugin-daemonset-q9svq -nkube-system
2021/02/08 06:32:36 Loading NVML
2021/02/08 06:32:42 Starting FS watcher.
2021/02/08 06:32:42 Starting OS watcher.
2021/02/08 06:32:42 Retreiving plugins.
2021/02/08 06:32:42 Starting GRPC server for 'nvidia.com/gpu'
2021/02/08 06:32:42 Starting to serve 'nvidia.com/gpu' on /var/lib/kubelet/device-plugins/nvidia-gpu.sock
2021/02/08 06:32:42 Registered device plugin for 'nvidia.com/gpu' with Kubelet
```
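nvidia-device-plugin.yml comes from the static deployment linked above. For orientation, a trimmed sketch of what that manifest contains is shown below (abbreviated from the upstream v0.7.3 DaemonSet; use the upstream file in practice, since fields such as tolerations and security settings have varied across releases):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin-daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin-ds
  template:
    metadata:
      labels:
        name: nvidia-device-plugin-ds
    spec:
      tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule
      containers:
      - name: nvidia-device-plugin-ctr
        image: nvidia/k8s-device-plugin:v0.7.3
        securityContext:
          capabilities:
            drop: ["ALL"]
        volumeMounts:
        # the plugin registers itself through the kubelet's socket directory
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
```

The hostPath mount is what lets the plugin create nvidia-gpu.sock next to kubelet.sock, matching the registration log lines above.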
d. Functional test
```
$ cd test-image
# Start the test pod
$ kubectl apply -f demo.yml
# manifest: https://github.com/NVIDIA/gpu-operator/blob/master/tests/gpu-pod.yaml
$ kubectl logs -f cuda-vector-add
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
```
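The linked gpu-pod.yaml requests a GPU through the extended resource nvidia.com/gpu. A minimal equivalent using the image imported earlier in this article might look like this (a sketch; registry.paas/cmss/cuda-vector-add:v0.1 is this article's internal registry image, substitute your own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: registry.paas/cmss/cuda-vector-add:v0.1
    resources:
      limits:
        # scheduled only onto nodes whose device plugin advertises this resource
        nvidia.com/gpu: 1
```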
For follow-up content, see the public account: DCOS
https://mp.weixin.qq.com/s/kl...