Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. This article walks through getting started with K8s quickly.
Learn About K8s
- Kubernetes / Overview
Set Up K8s
For local development and testing, we need a lightweight K8s setup. For real deployments, you can use a cloud provider's managed K8s service.
This article uses k3d as the example to set up K8s on macOS. On Ubuntu, MicroK8s is recommended instead. Other alternatives include:
- Kubernetes / Install Tools
- kind, minikube, kubeadm
- Docker Desktop / Deploy on Kubernetes
k3d
k3s is a lightweight Kubernetes distribution from Rancher, and k3d is "k3s in Docker": it manages k3s clusters as Docker containers.
The setup steps below are my notes from macOS, for reference. On other platforms, follow the official documentation.
```
# Install kubectl: the command-line tool
brew install kubectl
# Install kubecm: kubeconfig management tool
brew install kubecm
# Install k3d
brew install k3d

❯ k3d version
k3d version v4.4.8
k3s version latest (default)
```
Create a cluster (1 server, 2 agents):
```
❯ k3d cluster create mycluster --api-port 6550 --servers 1 --agents 2 --port 8080:80@loadbalancer --wait
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster' (23dc5761582b1a4b74d9aa64d8dca2256b5bc510c4580b3228123c26e93f456e)
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0001] Creating node 'k3d-mycluster-agent-0'
INFO[0001] Creating node 'k3d-mycluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-mycluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting Node 'k3d-mycluster-agent-0'
INFO[0022] Starting Node 'k3d-mycluster-agent-1'
INFO[0030] Starting helpers...
INFO[0030] Starting Node 'k3d-mycluster-serverlb'
INFO[0031] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0036] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0036] Cluster 'mycluster' created successfully!
INFO[0036] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0036] You can now use it like this:
kubectl config use-context k3d-mycluster
kubectl cluster-info
```
Check the cluster info:
```
❯ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
```
Check the resources:
```
# List the Nodes
❯ kubectl get nodes
NAME                     STATUS   ROLES                  AGE     VERSION
k3d-mycluster-agent-0    Ready    <none>                 2m12s   v1.20.10+k3s1
k3d-mycluster-server-0   Ready    control-plane,master   2m23s   v1.20.10+k3s1
k3d-mycluster-agent-1    Ready    <none>                 2m4s    v1.20.10+k3s1
# List the Pods
❯ kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-6488c6fcc6-5n7d9                  1/1     Running     0          2m12s
kube-system   metrics-server-86cbb8457f-dr7lh           1/1     Running     0          2m12s
kube-system   local-path-provisioner-5ff76fc89d-zbxf4   1/1     Running     0          2m12s
kube-system   helm-install-traefik-bfm4c                0/1     Completed   0          2m12s
kube-system   svclb-traefik-zx98g                       2/2     Running     0          68s
kube-system   svclb-traefik-7bx2r                       2/2     Running     0          68s
kube-system   svclb-traefik-cmdrm                       2/2     Running     0          68s
kube-system   traefik-6f9cbd9bd4-2mxhk                  1/1     Running     0          69s
```
Test Nginx:
```
# Create an Nginx Deployment
kubectl create deployment nginx --image=nginx
# Create a ClusterIP Service to expose the Nginx port
kubectl create service clusterip nginx --tcp=80:80
# Create an Ingress object
# k3s uses traefik as the default ingress controller
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
# Access the Nginx Service
# kubectl get pods to confirm nginx STATUS=Running
open http://127.0.0.1:8080
```
Test the Dashboard:
```
# Deploy the Dashboard
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml

# Configure RBAC
# admin user
cat <<EOF > dashboard.admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# admin user role
cat <<EOF > dashboard.admin-user-role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# Apply the configuration
kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml

# Get the Bearer Token
kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token

# Start the proxy
kubectl proxy
# Open the Dashboard and log in with the token
open http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```
Delete the cluster:

```
k3d cluster delete mycluster
```

Switch clusters:

```
kubecm s
```
References:
- k3s / Dashboard
- k3d / Exposing Services
MicroK8s
MicroK8s is a lightweight K8s from the official Ubuntu ecosystem, suited for developer workstations, IoT, edge, and CI/CD.
The setup steps below are my notes from Ubuntu 18/20, for reference. On other platforms, follow the official documentation.
```
# Check the hostname
# It must not contain uppercase letters or underscores;
# otherwise change it as described later
hostname

# Install microk8s
sudo apt install snapd -y
snap info microk8s
sudo snap install microk8s --classic --channel=1.21/stable

# Add the user to the microk8s group
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
newgrp microk8s
id $USER

## Some ways to make sure images get pulled

# Configure a proxy (if you have one)
# MicroK8s / Installing behind a proxy
# https://microk8s.io/docs/install-proxy
# Issue: Pull images from others than k8s.gcr.io
# https://github.com/ubuntu/microk8s/issues/472
sudo vi /var/snap/microk8s/current/args/containerd-env
  HTTPS_PROXY=http://127.0.0.1:7890
  NO_PROXY=10.1.0.0/16,10.152.183.0/24

# Add registry mirrors (docker.io)
# Registry mirrors:
# https://yeasy.gitbook.io/docker_practice/install/mirror
# You can also change sandbox_image in the templates under args/
sudo vi /var/snap/microk8s/current/args/containerd-template.toml
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://x.mirror.aliyuncs.com", "https://registry-1.docker.io", ]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
          endpoint = ["http://localhost:32000"]

# Import images manually, see the addons section below

# Restart the service
microk8s stop
microk8s start
```
Check the status:
```
$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    openebs              # OpenEBS is the open-source storage solution for Kubernetes
    openfaas             # openfaas serverless framework
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
```
If the status is not right, you can troubleshoot errors like this:
```
microk8s inspect
grep -r error /var/snap/microk8s/2346/inspection-report
```
To change the hostname:
```
# Change the name
sudo hostnamectl set-hostname ubuntu-vm
# Update the hosts file
sudo vi /etc/hosts
# On a cloud VM, also update the cloud-init config
sudo vi /etc/cloud/cloud.cfg
  preserve_hostname: true
  # If setting preserve_hostname alone doesn't take effect,
  # comment out set_hostname / update_hostname directly
  cloud_init_modules:
  # - set_hostname
  # - update_hostname
# Reboot and verify it took effect
sudo reboot
```
Next, enable some basic addons:
```
microk8s enable dns dashboard
# Check the Pods and confirm they are running
microk8s kubectl get pods --all-namespaces
# Otherwise, check the details for the cause of the error
microk8s kubectl describe pod --all-namespaces
```
Until they are all Running:
```
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-85fd7f45cb-snqrv        1/1     Running   1          15h
kube-system   dashboard-metrics-scraper-78d7698477-tmb7k   1/1     Running   1          15h
kube-system   metrics-server-8bbfb4bdb-wlf8g               1/1     Running   1          15h
kube-system   calico-node-p97kh                            1/1     Running   1          6m18s
kube-system   coredns-7f9c69c78c-255fg                     1/1     Running   1          15h
kube-system   calico-kube-controllers-f7868dd95-st9p7      1/1     Running   1          16h
```
If pulling images fails, you can microk8s ctr image pull <mirror>. Or docker pull first, then import into containerd:
```
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker save k8s.gcr.io/pause:3.1 > pause:3.1.tar
microk8s ctr image import pause:3.1.tar

docker pull calico/cni:v3.13.2
docker save calico/cni:v3.13.2 > cni:v3.13.2.tar
microk8s ctr image import cni:v3.13.2.tar

docker pull calico/node:v3.13.2
docker save calico/node:v3.13.2 > node:v3.13.2.tar
microk8s ctr image import node:v3.13.2.tar
```
If calico-node is in CrashLoopBackOff, it may be a network configuration problem:
```
# Check the detailed logs
microk8s kubectl logs -f -n kube-system calico-node-l5wl2 -c calico-node
# If you see "Unable to auto-detect an IPv4 address", run `ip a` to find
# which interface has an IP, then edit:
sudo vi /var/snap/microk8s/current/args/cni-network/cni.yaml
  - name: IP_AUTODETECTION_METHOD
    value: "interface=wlo.*"
# Restart the service
microk8s stop; microk8s start

## References
# Issue: Microk8s 1.19 not working on Ubuntu 20.04.1
# https://github.com/ubuntu/microk8s/issues/1554
# Issue: CrashLoopBackOff for calico-node pods
# https://github.com/projectcalico/calico/issues/3094
# Changing the pods CIDR in a MicroK8s cluster
# https://microk8s.io/docs/change-cidr
# MicroK8s IPv6 DualStack HOW-TO
# https://discuss.kubernetes.io/t/microk8s-ipv6-dualstack-how-to/14507
```
Then, open the Dashboard and take a look:
```
# Get the token (with RBAC not enabled)
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
# Forward the port
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
# Open the page and log in with the token
xdg-open https://127.0.0.1:10443
# More notes: https://microk8s.io/docs/addon-dashboard
# Issue: Your connection is not private
# https://github.com/kubernetes/dashboard/issues/3804
```
For more operations, read the official documentation. The rest of this article continues with k3d.
Prepare a K8s Application
Go Application
http_server.go:
```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("HTTP Server running ...")
	log.Fatal(http.ListenAndServe(":3000", nil))
}
```
Build the Image
http_server.dockerfile:
```dockerfile
FROM golang:1.17-alpine AS builder
WORKDIR /app
ADD ./http_server.go /app
RUN cd /app && go build http_server.go

FROM alpine:3.14
WORKDIR /app
COPY --from=builder /app/http_server /app/
EXPOSE 3000
ENTRYPOINT ./http_server
```
```
# Build the image
docker build -t http_server:1.0 -f http_server.dockerfile .
# Run the app
docker run --rm -p 3000:3000 http_server:1.0
# Test the app
❯ curl http://127.0.0.1:3000/go
Hi there, I love go!
```
Deploy the K8s Application
Understand the Concepts
- Kubernetes / Workloads
Next, following the official tutorials, we will run the Go application (stateless) with a Deployment.
Import the Image
First, import the image into the cluster manually:
```
docker save http_server:1.0 > http_server:1.0.tar
k3d image import http_server:1.0.tar -c mycluster
```
If you have your own private registry, see k3d / Registries for how to configure it.
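For illustration only, a rough sketch of using a k3d-managed registry instead of manual imports (the registry name and port are placeholders; check the k3d docs for your version, and the host may need an /etc/hosts entry for the registry name):

```
# Create a registry container (name/port are examples)
k3d registry create myregistry.localhost --port 5050
# Create the cluster wired to that registry
k3d cluster create mycluster --registry-use k3d-myregistry.localhost:5050
# Push the image, then reference it in manifests as
# k3d-myregistry.localhost:5050/http_server:1.0
docker tag http_server:1.0 k3d-myregistry.localhost:5050/http_server:1.0
docker push k3d-myregistry.localhost:5050/http_server:1.0
```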
Create a Deployment
```
# Define the Deployment (2 replicas)
cat <<EOF > go-http-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-http
  labels:
    app: go-http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-http
  template:
    metadata:
      labels:
        app: go-http
    spec:
      containers:
      - name: go-http
        image: http_server:1.0
        ports:
        - containerPort: 3000
EOF
# Apply the Deployment
# --record: record the command
kubectl apply -f go-http-deployment.yaml --record
```
Check the Deployment:
```
# List the Deployments
❯ kubectl get deploy
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
nginx     1/1     1            1           2d
go-http   2/2     2            2           22s
# Show the Deployment details
kubectl describe deploy go-http
# List the ReplicaSets created by the Deployment
kubectl get rs
# List the Pods created by the Deployment (2)
kubectl get po -l app=go-http -o wide --show-labels
# Show the details of one Pod
kubectl describe po go-http-5848d49c7c-wzmxh
```
- k8s / Deployments
Create a Service
```
# Create a Service named go-http
# It proxies requests to Pods with app=go-http on TCP port 3000
kubectl expose deployment go-http --name=go-http
# Or:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: go-http
  labels:
    app: go-http
spec:
  selector:
    app: go-http
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
EOF
# List the Services
kubectl get svc
# Show the Service details
kubectl describe svc go-http
# Compare the Endpoints with the Pods
# kubectl get ep go-http
# kubectl get po -l app=go-http -o wide
# Delete the Service (if needed)
kubectl delete svc go-http
```
Access the Service (DNS):
```
❯ kubectl run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup go-http
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      go-http
Address 1: 10.43.102.17 go-http.default.svc.cluster.local
[ root@curl:/ ]$ curl http://go-http:3000/go
Hi there, I love go!
```
Expose the Service (Ingress):
```
# Create an Ingress object
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-http
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /go
        pathType: Prefix
        backend:
          service:
            name: go-http
            port:
              number: 3000
EOF
# List the Ingresses
kubectl get ingress
# Show the Ingress details
kubectl describe ingress go-http
# Delete the Ingress (if needed)
kubectl delete ingress go-http
```
Access the Service (Ingress):
```
❯ open http://127.0.0.1:8080/go
# Or:
❯ curl http://127.0.0.1:8080/go
Hi there, I love go!
# Nginx is still at http://127.0.0.1:8080
```
- k8s / Connecting Applications with Services
- k8s / Exposing an External IP Address
Update
A Deployment rollout is triggered only when the Deployment's Pod template changes, for example when the template's labels or container images are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
So we prepare an http_server:2.0 image and import it into the cluster, then update the Deployment's image.
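A minimal sketch of producing 2.0, assuming the only change is bumping the greeting in http_server.go to "Hi there v2", then importing it the same way as before:

```
# Assumes handler now prints "Hi there v2, I love %s!"
docker build -t http_server:2.0 -f http_server.dockerfile .
docker save http_server:2.0 > http_server:2.0.tar
k3d image import http_server:2.0.tar -c mycluster
```

Then update the image: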
```
❯ kubectl set image deployment/go-http go-http=http_server:2.0 --record
deployment.apps/go-http image updated
```
After that, check the rollout status:
```
# Check the rollout status
❯ kubectl rollout status deployment/go-http
deployment "go-http" successfully rolled out
# Check the ReplicaSets: the new one scaled up, the old one scaled down
❯ kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
go-http-586694b4f6   2         2         2       10s
go-http-5848d49c7c   0         0         0       6d
```
Test the service:
```
❯ curl http://127.0.0.1:8080/go
Hi there v2, I love go!
```
Rollback
Check the Deployment's revision history:
```
❯ kubectl rollout history deployment.v1.apps/go-http
deployment.apps/go-http
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=go-http-deployment.yaml --record=true
2         kubectl set image deployment/go-http go-http=http_server:2.0 --record=true

# Inspect a specific revision
kubectl rollout history deployment.v1.apps/go-http --revision=2
```
Roll back to a previous revision:
```
# Roll back to the previous revision
kubectl rollout undo deployment.v1.apps/go-http
# Roll back to a specific revision
kubectl rollout undo deployment.v1.apps/go-http --to-revision=1
```
Scaling
```
# Scale the Deployment's replica count
kubectl scale deployment.v1.apps/go-http --replicas=10
# If horizontal Pod autoscaling is enabled in the cluster, you can set
# upper and lower bounds based on the CPU usage of existing Pods
# Horizontal Pod Autoscaler Walkthrough
# https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80
```
Pause and Resume
```
# Pause the Deployment
kubectl rollout pause deployment.v1.apps/go-http
# Resume the Deployment
kubectl rollout resume deployment.v1.apps/go-http
```
While paused, you can keep updating the Deployment without triggering rollouts; see the sketch below.
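A small sketch of batching changes while paused and rolling them all out at once on resume (the resource limits here are illustrative values):

```
kubectl rollout pause deployment.v1.apps/go-http
# Neither change triggers a rollout while paused
kubectl set image deployment.v1.apps/go-http go-http=http_server:2.0
kubectl set resources deployment.v1.apps/go-http -c=go-http --limits=cpu=200m,memory=128Mi
# Resuming rolls out all pending changes in a single rollout
kubectl rollout resume deployment.v1.apps/go-http
```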
Delete
```
kubectl delete deployment go-http
```
Canary Deployment
A canary (gray) release uses extra labels to distinguish multiple Deployments, so old and new versions can run side by side. Deploy the new version to a small share of traffic for validation first, then roll out fully once it proves fine; a sketch follows the reference below.
- k8s / Canary Deployments
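For illustration, a minimal sketch of the usual label scheme (the names and the track label are conventions, not requirements): both Deployments carry app: go-http, which the Service selects on, and differ only in track, so roughly 1/4 of traffic hits the canary here.

```
# Stable deployment: 3 replicas of the current version
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-http-stable
spec:
  replicas: 3
  selector:
    matchLabels:
      app: go-http
      track: stable
  template:
    metadata:
      labels:
        app: go-http
        track: stable
    spec:
      containers:
      - name: go-http
        image: http_server:1.0
        ports:
        - containerPort: 3000
EOF
# Canary deployment: 1 replica of the new version
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-http-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-http
      track: canary
  template:
    metadata:
      labels:
        app: go-http
        track: canary
    spec:
      containers:
      - name: go-http
        image: http_server:2.0
        ports:
        - containerPort: 3000
EOF
# A Service selecting only app=go-http (no track) spreads traffic across
# both; to promote, scale the canary up and the stable one down
```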
Release with Helm
Helm is the package manager for K8s; its packages are called charts. Now let's release our Go service with it.
Install Helm
```
# macOS
brew install helm
# Ubuntu
sudo snap install helm --classic
```
Run helm to get familiar with the commands.
Create a Chart
```
helm create go-http
```
Inspect the contents:
```
❯ tree go-http -aF --dirsfirst
go-http
├── charts/            # charts this chart depends on, called subcharts
├── templates/         # the chart's K8s manifest templates, using Go templates
│   ├── tests/
│   │   └── test-connection.yaml
│   ├── NOTES.txt      # the chart's help text
│   ├── _helpers.tpl   # reusable template snippets, referenced via include
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   └── serviceaccount.yaml
├── .helmignore        # packaging ignore rules
├── Chart.yaml         # the chart's description file
└── values.yaml        # default values, overridable at install, referenced via .Values
```
Modify the contents:

- Modify the description in Chart.yaml
  - See The Chart.yaml File for more
- Modify the variables in values.yaml (a sketch of the result follows this list)
  - Change image to the Go service being released
  - Change ingress to true, plus a few related settings
  - Remove serviceAccount and autoscaling, and search templates/ to remove the related content as well
- Modify the templates in templates/
  - Remove tests/; the remaining deployment.yaml, service.yaml, and ingress.yaml are the ones we need
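For illustration, the touched parts of values.yaml might end up roughly like this (the repository, tag, and empty host are assumptions for this local k3d setup; adjust to your image and paths):

```yaml
replicaCount: 1

image:
  repository: http_server    # assumed: the image imported earlier
  tag: "1.0"
  pullPolicy: IfNotPresent

ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
  hosts:
    - host: ""               # no host rule; paths are set at install time
      paths: []
```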
The result can be seen at start-k8s/helm/go-http.
Check for errors:
```
❯ helm lint --strict go-http
==> Linting go-http
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
```
Render the templates:
```
# --set overrides the defaults; or use -f to pass your own values.yaml
helm template go-http-helm ./go-http \
  --set replicaCount=2 \
  --set "ingress.hosts[0].paths[0].path=/helm" \
  --set "ingress.hosts[0].paths[0].pathType=Prefix"
```
Install the Chart:
```
helm install go-http-helm ./go-http \
  --set replicaCount=2 \
  --set "ingress.hosts[0].paths[0].path=/helm" \
  --set "ingress.hosts[0].paths[0].pathType=Prefix"

# Or package it first, then install
helm package go-http
helm install go-http-helm go-http-1.0.0.tgz \
  --set replicaCount=2 \
  --set "ingress.hosts[0].paths[0].path=/helm" \
  --set "ingress.hosts[0].paths[0].pathType=Prefix"

# List the installed releases
helm list
```
Test the service:
```
❯ kubectl get deploy go-http-helm
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
go-http-helm   2/2     2            2           2m42s

❯ curl http://127.0.0.1:8080/helm
Hi there, I love helm!
```
Uninstall the Chart:
```
helm uninstall go-http-helm
```
- Helm / Chart Template Guide
Publish the Chart
The official hub, ArtifactHub, hosts many shared Helm charts. See velkoz1108/helm-chart for publishing our Go service to the Hub; a sketch of a basic self-hosted chart repository follows the reference below.
- Helm / Chart Repository Guide
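For reference, a minimal sketch of hosting a chart repository on any static file server (https://example.com/charts is a placeholder URL):

```
# Package the chart
helm package go-http
# Generate or refresh the repository index for the packaged charts
helm repo index . --url https://example.com/charts
# Upload go-http-1.0.0.tgz and index.yaml to the server; consumers then:
helm repo add myrepo https://example.com/charts
helm install go-http-helm myrepo/go-http
```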
Finally
Go start with K8s! The samples for this article are at ikuokuo/start-k8s.
GoCoding shares hands-on personal experience. Follow the public account!