About golang: operator-sdk demo (trying it out)

operator-sdk demo

References

Golang Based Operator Tutorial

Building a demo project

Prerequisites

  • Install operator-sdk: operator-sdk release
  • Access to a Kubernetes cluster of v1.11.3+ (v1.16.0+ if using apiextensions.k8s.io/v1 CRDs)
  • Access to the Kubernetes cluster as admin

Initialize the project

env:

$ operator-sdk version
operator-sdk version: "v1.3.0", commit: "1abf57985b43bf6a59dcd18147b3c574fa57d3f6", kubernetes version: "1.19.4", go version: "go1.15.5", GOOS: "linux", GOARCH: "amd64"

$ mkdir -p ${HOME}/golang_codes/operator_sdk_app/demo-operator
$ cd ${HOME}/golang_codes/operator_sdk_app/demo-operator

init:

$ operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.7.0
go: downloading ...
Update go.mod:
$ go mod tidy
go: downloading ...
Running make:
$ make
go: creating new go.mod: module tmp
Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.1
go: downloading ...
go fmt ./...
go vet ./...
go build -o bin/manager main.go

Understanding the project layout

The --repo=<path> flag is required when creating a project outside of $GOPATH/src, as scaffolded files require a valid module path.

I didn't quite get the paragraph above at first =。= (Outside $GOPATH/src, Go cannot infer a module path from the directory layout, so --repo supplies one explicitly.)

a. Manager:

The main program, main.go, initializes and runs the Manager. The Manager registers the _Scheme_ for the custom resource API definitions, and sets up and runs the controllers and webhooks (see the Kubebuilder entrypoint doc). The Manager can restrict the namespaces that all controllers watch:

// By default, watch the namespace the operator itself runs in:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})

// To watch all namespaces, leave Namespace empty:
mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""})

// To watch a specific set of namespaces, use MultiNamespacedCacheBuilder:
var namespaces []string // List of Namespaces
// Create a new Cmd to provide shared dependencies and start components
mgr, err := ctrl.NewManager(cfg, manager.Options{
    NewCache: cache.MultiNamespacedCacheBuilder(namespaces),
})

b. Operator scope:

namespace-scope vs cluster-scope:operator scope

create api & controller:

a. Multiple API groups:

$ operator-sdk edit --multigroup=true

The _PROJECT_ file is updated to:

domain: example.com
layout: go.kubebuilder.io/v3
multigroup: true
...

See also: multi-group migration

b. Create:

$ operator-sdk create api --group=cache --version=v1alpha1 --kind=Memcached --resource=true --controller=true
Writing scaffold for you to edit...
api/v1alpha1/memcached_types.go
controllers/memcached_controller.go
Running make:
$ make
/root/golang_codes/operator_sdk_app/demo-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go

See also: the CRD API conventions
See also: kubebuilder api
See also: kubebuilder controller

Fleshing out the logic

Modifying the CR API:

Modify the CR types (*_types.go), and add the marker to the CR definition: _+kubebuilder:subresource:status_

Add the +kubebuilder:subresource:status marker to add a status subresource to the CRD manifest so that the controller can update the CR status without changing the rest of the CR object.

Status subresource?? After editing the types, sync the generated deepcopy implementations: _make generate_
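As a sketch of what the edited api/v1alpha1/memcached_types.go might look like (the Size and Nodes fields follow the upstream Memcached tutorial, not necessarily this demo's actual code):

```go
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Size is the number of memcached pods to run
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}
```

With +kubebuilder:subresource:status in place, the controller updates status through the status subresource client, r.Status().Update(ctx, memcached), instead of a plain Update on the whole object.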

Generating the CRD manifests:

Once the CR API is defined, generate the CRD manifests: _make manifests_
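For the group and domain used above (cache / example.com), make manifests writes the generated CRD under config/crd/bases/; a quick sanity check (the file name is assumed from controller-gen's `<group>.<domain>_<plural>.yaml` naming convention):

```shell
$ make manifests
$ ls config/crd/bases/
cache.example.com_memcacheds.yaml
```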

OpenAPI validation:

OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached Custom Resource when it is created or updated. Markers (annotations) are available to configure validations for your API. These markers will always have a +kubebuilder:validation prefix. Usage of markers in API code is discussed in the kubebuilder CRD generation and marker documentation. A full list of OpenAPIv3 validation markers can be found here. To learn more about OpenAPI v3.0 validation schemas in CRDs, refer to the Kubernetes Documentation.
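For example, a couple of validation markers on the Size field (the bounds here are illustrative, not from this demo's code):

```go
// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Size is the number of memcached pods to run.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=5
	Size int32 `json:"size"`
}
```

Re-run make manifests afterwards so the OpenAPI schema embedded in the CRD picks up the constraints.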

Implementing the controller:

reconciliation logic.

Controller logic implementation

a. Resources the controller watches:
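A sketch of SetupWithManager in controllers/memcached_controller.go, following the upstream Memcached tutorial: the controller watches the primary Memcached resource via For, and secondary Deployments it owns via Owns, so a change to either triggers reconciliation.

```go
// SetupWithManager wires the controller into the Manager.
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1alpha1.Memcached{}). // primary resource
		Owns(&appsv1.Deployment{}).      // secondary, owned resource
		Complete(r)
}
```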

b. The reconcile loop:
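A minimal sketch of the reconcile loop, which compares the actual cluster state with the desired state declared in the CR (the Deployment create/update logic is elided; the signature matches controller-runtime v0.7.0, which this project pins):

```go
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the Memcached instance that triggered this reconciliation.
	memcached := &cachev1alpha1.Memcached{}
	if err := r.Get(ctx, req.NamespacedName, memcached); err != nil {
		if errors.IsNotFound(err) {
			// CR was deleted; owned objects are garbage-collected via owner references.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// ...ensure the owned Deployment exists and matches Spec.Size,
	// then update memcached.Status via r.Status().Update(ctx, memcached)...

	return ctrl.Result{}, nil
}
```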

c. Defining permissions & generating the RBAC manifests:
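Permissions are declared as kubebuilder RBAC markers above Reconcile, and make manifests turns them into config/rbac/role.yaml. For example (markers as in the upstream Memcached tutorial):

```go
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
```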

Deploy and run

Create the resources:

make install

Set up the test environment:

a. Outside the cluster

Not tried yet.

b. Inside the cluster

make docker-build IMG=my/memcached-operator:v0.0.1

Deploy the operator:

make deploy IMG=my/memcached-operator:v0.0.1

Note: the steps above do not cover webhook admission validation; you can temporarily disable the admission controllers. After editing /etc/kubernetes/manifests/kube-apiserver.yaml, restart kubelet for the change to take effect.

After deployment, check:

$ kubectl -n demo-operator-system get cm
NAME                           DATA   AGE
demo-operator-manager-config   1      4h6m
f1c5ece8.example.com           0      4h5m
$ kubectl -n demo-operator-system get pods
NAME                                                READY   STATUS    RESTARTS   AGE
demo-operator-controller-manager-57d894fbfc-gjv9b   2/2     Running   0          4h6m

The pod runs two containers: kube-rbac-proxy and manager.

Deploying the operator with OLM:

todo..

Some issues encountered:

1. go mod download fails when building the image

(1) Modify the Dockerfile and add environment variables:

ENV GO111MODULE=on \
    GOPROXY=https://goproxy.cn,direct

(2) If running docker build inside a VM, add a flag to share the host's network:

docker build --network=host -t ...

2. Rebuilding the image repeatedly leaves <none>:<none> images behind

# Inspect
$ docker images -a | grep none

# Remove the "bad" <none>:<none> images
# docker rmi $(docker images -f "dangling=true" -q)
# Clean up dangling images
$ docker image prune

# Rebuild with a different version tag
$ docker build -t ...

3. Image pull fails because of the firewall

The image gcr.io/distroless/static:nonroot could not be pulled, and no usable http_proxy was found (~▽~).
Use a substitute image instead: gengweifeng/gcr-io-distroless-static-nonroot (Docker Hub)

4. The pod fails to pull the image when building & deploying in the cluster

First confirm imagePullPolicy: IfNotPresent.
(1) On a regular kubernetes cluster:
Building the image locally means only that one node has it; the other nodes do not. Copy the image to the other nodes, or constrain the pod with node affinity.
(2) On minikube:
Run before building: eval $(minikube docker-env)
