In "The evolution of multi-cluster support in Istio 1.8" we introduced the four Istio multi-cluster deployment models and briefly walked through the deployment steps for the single-network Primary-Remote model. In this post we look at how Istio supports multi-cluster mode by reading the source code.
The walkthrough covers two parts, the istioctl command and the Pilot-discovery source, and is based on Istio 1.8.
Istioctl commands
Istioctl provides several commands for multi-cluster support. The code lives under the istioctl/pkg/multicluster
path and includes the following subcommands:
- apply: update the clusters in a multi-cluster mesh based on the mesh topology
- describe: describe the status of the multi-cluster mesh's control plane
- generate: generate a cluster-specific control plane configuration based on the mesh description and runtime state
You can get help for these three commands with -h:
$ istioctl x multicluster -h
Commands to assist in managing a multi-cluster mesh [Deprecated, it will be removed in Istio 1.9]

Usage:
  istioctl experimental multicluster [command]

Aliases:
  multicluster, mc

Available Commands:
  apply       Update clusters in a multi-cluster mesh based on mesh topology
  describe    Describe status of the multi-cluster mesh's control plane
  generate    generate a cluster-specific control plane configuration based on the mesh description and runtime state

Flags:
  -h, --help   help for multicluster

Global Flags:
      --context string          The name of the kubeconfig context to use
  -i, --istioNamespace string   Istio system namespace (default "istio-system")
  -c, --kubeconfig string       Kubernetes configuration file
  -n, --namespace string        Config namespace

Use "istioctl experimental multicluster [command] --help" for more information about a command.
- create-remote-secret: create a secret with credentials that allow Istio to access a remote Kubernetes apiserver.
For example, when deploying a multi-cluster model we always run a command like the following (here the remote cluster is named sgt-base-sg1-prod):
istioctl x create-remote-secret \
  --context="${CTX_REMOTE}" \
  --name=sgt-base-sg1-prod | \
  kubectl apply -f - --context="${CTX_CONTROL}"
The command does two things:
- Against the remote cluster: it creates the istio-system namespace, creates the istio-reader-service-account and istiod-service-account service accounts in that namespace together with the RBAC bindings for those two accounts, and on success returns the Secret that the control cluster needs (a sketch of how such a secret is put together follows after this list).
- It applies the Secret returned by the previous step to the control cluster.
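To make the first step more concrete, here is a minimal sketch of how such a secret could be assembled. It is not the exact istioctl implementation; clusterName, server, caData and token stand for values read from the remote cluster (the token is typically that of istio-reader-service-account):

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
    clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// buildRemoteSecret builds a kubeconfig for the remote apiserver and wraps it in a
// Secret labeled istio/multiCluster=true, which is what the control cluster consumes.
func buildRemoteSecret(clusterName, server string, caData []byte, token string) (*corev1.Secret, error) {
    kubeCfg := clientcmdapi.NewConfig()
    kubeCfg.Clusters[clusterName] = &clientcmdapi.Cluster{
        Server:                   server,
        CertificateAuthorityData: caData,
    }
    kubeCfg.AuthInfos[clusterName] = &clientcmdapi.AuthInfo{Token: token}
    kubeCfg.Contexts[clusterName] = &clientcmdapi.Context{Cluster: clusterName, AuthInfo: clusterName}
    kubeCfg.CurrentContext = clusterName

    // Serialize the kubeconfig to YAML; it becomes the value under the cluster-name key.
    data, err := clientcmd.Write(*kubeCfg)
    if err != nil {
        return nil, err
    }
    return &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      "istio-remote-secret-" + clusterName,
            Namespace: "istio-system",
            Labels:    map[string]string{"istio/multiCluster": "true"},
        },
        StringData: map[string]string{clusterName: string(data)},
    }, nil
}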
Here is the Secret content returned in practice:
# This file is autogenerated, do not edit.
apiVersion: v1
kind: Secret
metadata:
  annotations:
    networking.istio.io/cluster: sgt-base-sg1-prod
  creationTimestamp: null
  labels:
    istio/multiCluster: "true"
  name: istio-remote-secret-sgt-base-sg1-prod
  namespace: istio-system
stringData:
  sgt-base-sg1-prod: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJ0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01Ea3lNekF5TlRJME0xb1hEVE13TURreU1UQXlOVEkwTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWFJCk5DcW1McGNjTENGNDNqTDZET1phNnhUMU5kbm9yNkpWR0w5a0FNNGMzVDZDZ1ZYOUpDbGxxdmVDQkRMclgremEKcGQwZ1orNFZqZUtHWk9jdklnc3p2dDV4TTJoWDBBZ1BQMFFDNnl2bnc5VXBrOHBNcDFLVkV1L3pUSXFPTlplcAp0NmlGcjIya1dUaWgwYmhIeDQwc3JoQXZjWXM2NStlb240QmhBYTBGR1dreWM4dUZqRmRnT2hYS3hzd01EdkRiCmUzenlMc3ZOb2NvT3V1U2JrR3hUNmtKeGhmdHI4dEZnWGllM2dYSFJnSitQUUN6UElCM1JZdEsxMGdROHB6T1UKOTAwb3p0TlllZGg4MUhZcjZSV0ZDb1FBMXlpN2xEL3BUWlo4UnRkZTZQWmt0bStFNnJkaEI2a0ZkZmFtY3U4MgptamlQZGxmYWVrSXFCTGxoa1NFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHUHEzVllkWmFJZFdOMDk5OW5TV1RIV0E0VkYKMzROZ1pEVEdHY3QvWUpNWmZGRnVnSjlqRVBMdTZiSklrZFVHcHNCbkhvNUFsTHJZTjU2dnFkL0MrVTlOc2R2NwpnQ0FBTlNDMVArYktUZmVmWGpQd1dhY0R2RCtTZWIrTHhGUmF3NWZyNDZJNEtTRE12RUZ0T3JaRmhWL3AvQkF5ClZJT01GMDF3aCtOa045OVlWMUZ0S1pLRnd6WGVaM3N3TXBCek50a2daYzlDMjhvdlR5TGNFT05ucGk0dDRmc28KSGpYdkJubUVvak5UcmZtL3F3M1l6Y3dBNXUzekRoRlFkTU5PWlFWVk1EVmhzOFZBOXhyRk1iUFhCSWRiZmZRSApva3QvWkJ0WHRwQm9qaGZmYlJSR0pRQTBFbTk0WTRGNEhhSlFMM2QwMGRoSy9mL1Fiak5BUVhFVFhqRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://88876557684F299B0ED2.xxx.ap-southeast-1.eks.amazonaws.com
      name: sgt-base-sg1-prod
    contexts:
    - context:
        cluster: sgt-base-sg1-prod
        user: sgt-base-sg1-prod
      name: sgt-base-sg1-prod
    current-context: sgt-base-sg1-prod
    kind: Config
    preferences: {}
    users:
    - name: sgt-base-sg1-prod
      user:
        token: eyJhbGciOiJSUzI1NirbWFQRjFVeVI3WlZ2Qk9YQ0Qzb2FINl9xMkE5X0MzbXEwb2hVWFVnZjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJpc3Rpby1zeXN0ZW0iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiaXN0aW8tcmVhZGVyLXNlcnZpY2UtYWNjb3VudC10b2tlbi1wdHFmOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJpc3Rpby1yZWFkZXItc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzJlZDQwYzktNGNmNC00Y2EwLWI1YzYtZThhZTczNjFlMDI2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmlzdGlvLXN5c3RlbTppc3Rpby1yZWFkZXItc2VydmljZS1hY2NvdW50In0.WbtOZc0390Yq147gvOFdWsaxhEwAC7vaNzhKtlKIf9JXRIGZhkt91zPU_fJLGAlMlj9RSc5QMzQokLSvA_69fGlXnZpdiPvVBrmWJtOQ_tUNJCAL-MfBerZ1y7Kp6Itaw3j1t2M2Ksj5h1SuqfWdiBbNAwb5ehyVJoGpAxppSGdrLGbMWHH1iZCCz6T3WnPPmMfFktcgFDJYlHuuwRaIsuNgD-nUOrUM7-PQiv2sOGVy8EYbObl9AvcvlklZz5KSHfk6GkJ_RYYObFpy-M8ZOYEA2lTpeg5Wer65nlOXo_FYUQ1It4jsZdsuj9cctIQautT6ExhrG30oAhpamzKs8A
We can see that the secret carries the label istio/multiCluster: "true".
In the pilot-discovery code covered below, only secrets with this label are processed.
Why does creating this secret matter?
- It lets the control plane authenticate connection requests from workloads running in the remote cluster. Without access to the remote API server, the control plane would reject those requests.
- It enables discovery of the service endpoints running in the remote cluster, as the sketch below illustrates.
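A minimal sketch of the second point, assuming kubeconfigBytes holds the value stored under the sgt-base-sg1-prod key of the secret above: the control plane can turn it into a client for the remote apiserver and read endpoints, which is the kind of read access the istio-reader service account is granted. listRemoteEndpoints is only an illustration, not Istio code:

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// listRemoteEndpoints builds a clientset from the kubeconfig carried by the secret
// and lists the endpoints of the remote cluster, the raw material for cross-cluster
// service discovery.
func listRemoteEndpoints(kubeconfigBytes []byte) error {
    restCfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfigBytes)
    if err != nil {
        return err
    }
    client, err := kubernetes.NewForConfig(restCfg)
    if err != nil {
        return err
    }
    eps, err := client.CoreV1().Endpoints(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, ep := range eps.Items {
        _ = ep // in istiod these feed the service registry for the remote cluster
    }
    return nil
}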
Pilot-discovery
Pilot-discovery keeps a multicluster object in its Server struct, defined as follows:
type Multicluster struct {
    WatchedNamespaces     string
    DomainSuffix          string
    ResyncPeriod          time.Duration
    serviceController     *aggregate.Controller
    XDSUpdater            model.XDSUpdater
    metrics               model.Metrics
    endpointMode          EndpointMode
    m                     sync.Mutex // protects remoteKubeControllers
    remoteKubeControllers map[string]*kubeController
    networksWatcher       mesh.NetworksWatcher
    // fetchCaRoot maps the certificate name to the certificate
    fetchCaRoot      func() map[string]string
    caBundlePath     string
    systemNamespace  string
    secretNamespace  string
    secretController *secretcontroller.Controller
    syncInterval     time.Duration
}
It holds the remote kube controllers plus the multi-cluster specific settings.
The object is instantiated while the pilot-discovery component bootstraps:
if err := s.initClusterRegistries(args); err != nil {
    return nil, fmt.Errorf("error initializing cluster registries: %v", err)
}
Based on the RegistryOptions passed in, it starts the secret controller that watches remote clusters and initializes the Multicluster struct:
func (s *Server) initClusterRegistries(args *PilotArgs) (err error) {
    if hasKubeRegistry(args.RegistryOptions.Registries) {
        log.Info("initializing Kubernetes cluster registry")
        mc, err := controller.NewMulticluster(s.kubeClient,
            args.RegistryOptions.ClusterRegistriesNamespace,
            args.RegistryOptions.KubeOptions,
            s.ServiceController(),
            s.XDSServer,
            s.environment)
        if err != nil {
            log.Info("Unable to create new Multicluster object")
            return err
        }
        s.multicluster = mc
    }
    return nil
}
The heart of this method is the call to NewMulticluster:
func NewMulticluster(kc kubernetes.Interface, secretNamespace string, opts Options,
    serviceController *aggregate.Controller, xds model.XDSUpdater, networksWatcher mesh.NetworksWatcher) (*Multicluster, error) {
    remoteKubeController := make(map[string]*kubeController)
    if opts.ResyncPeriod == 0 {
        // make sure a resync time of 0 wasn't passed in.
        opts.ResyncPeriod = 30 * time.Second
        log.Info("Resync time was configured to 0, resetting to 30")
    }
    mc := &Multicluster{
        WatchedNamespaces:     opts.WatchedNamespaces,
        DomainSuffix:          opts.DomainSuffix,
        ResyncPeriod:          opts.ResyncPeriod,
        serviceController:     serviceController,
        XDSUpdater:            xds,
        remoteKubeControllers: remoteKubeController,
        networksWatcher:       networksWatcher,
        metrics:               opts.Metrics,
        fetchCaRoot:           opts.FetchCaRoot,
        caBundlePath:          opts.CABundlePath,
        systemNamespace:       opts.SystemNamespace,
        secretNamespace:       secretNamespace,
        endpointMode:          opts.EndpointMode,
        syncInterval:          opts.GetSyncInterval(),
    }

    mc.initSecretController(kc)
    return mc, nil
}
The Multicluster struct implements three main methods:
- AddMemberCluster: the callback invoked when a remote cluster is added. It sets up all the handlers needed to watch resources that are added, deleted, or changed in the remote cluster.
- DeleteMemberCluster: the callback invoked when a remote cluster is removed, that is, when the cluster is no longer part of the mesh. It also clears the caches so the remote cluster's resources are dropped.
- UpdateMemberCluster: runs DeleteMemberCluster first, then AddMemberCluster.
These three methods are handed to the secret controller held by the Multicluster object:
func (m *Multicluster) initSecretController(kc kubernetes.Interface) {
    m.secretController = secretcontroller.StartSecretController(kc,
        m.AddMemberCluster,
        m.UpdateMemberCluster,
        m.DeleteMemberCluster,
        m.secretNamespace,
        m.syncInterval)
}
The secret controller watches for secret changes, but it does not act on every secret. Only when a secret carries the istio/multiCluster: "true"
label, marking it as representing a remote cluster, does the controller react, namely by invoking the three methods described above.
secretsInformer := cache.NewSharedIndexInformer(
    &cache.ListWatch{
        ListFunc: func(opts meta_v1.ListOptions) (runtime.Object, error) {
            opts.LabelSelector = MultiClusterSecretLabel + "=true"
            return kubeclientset.CoreV1().Secrets(namespace).List(context.TODO(), opts)
        },
        WatchFunc: func(opts meta_v1.ListOptions) (watch.Interface, error) {
            opts.LabelSelector = MultiClusterSecretLabel + "=true"
            return kubeclientset.CoreV1().Secrets(namespace).Watch(context.TODO(), opts)
        },
    },
    &corev1.Secret{}, 0, cache.Indexers{},
)
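To make the wiring concrete, here is a simplified sketch, not the actual secretcontroller source, of what the add handler does with a matching secret: every key in the secret's data is treated as a cluster ID and every value as a kubeconfig, and each entry yields a remote clientset that is handed to the add callback. newRemoteClient is a hypothetical helper that builds the clientset along the lines of the earlier sketch:

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"

    "istio.io/pkg/log"
)

type addClusterCallback func(clients kubernetes.Interface, clusterID string) error

// handleSecretAdd is a simplified stand-in for the secret controller's add handler.
func handleSecretAdd(s *corev1.Secret, addCallback addClusterCallback) {
    for clusterID, kubeConfig := range s.Data {
        // newRemoteClient is hypothetical: build a clientset from the kubeconfig bytes.
        clients, err := newRemoteClient(kubeConfig)
        if err != nil {
            log.Errorf("could not build client for cluster %s: %v", clusterID, err)
            continue
        }
        if err := addCallback(clients, clusterID); err != nil {
            log.Errorf("error adding cluster %s to the mesh: %v", clusterID, err)
        }
    }
}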
With this, Istio discovers remote clusters automatically.
So what does Istio do once it has discovered a remote cluster?
The Multicluster object contains two core members: remoteKubeControllers and serviceController.
remoteKubeControllers is a map[string]*kubeController; the key is the remote cluster ID and the value is a pointer to a kubeController.
type kubeController struct {
    *Controller
    stopCh chan struct{}
}
A kubeController reads the remote cluster's Services, Pods, nodes, and so on, and converts them into Istio's internal model objects.
serviceController is an aggregate.Controller, which aggregates the data from the different registries and watches them for changes; it plays the role of the service registry we usually rely on.
type Controller struct {
    registries []serviceregistry.Instance
    storeLock  sync.RWMutex
    meshHolder mesh.Holder
}
When a new cluster is added, AddMemberCluster instantiates a kubeController for it, adds it to remoteKubeControllers, starts that controller in its own goroutine, and then registers the remote cluster with serviceController; from that point on the control cluster discovers resource objects in the remote cluster.
When a remote cluster is removed, DeleteMemberCluster removes the cluster's kubeController from remoteKubeControllers, signals the goroutine running that controller to exit, and deregisters the remote cluster from the serviceController registry.
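This lifecycle can be condensed into the following self-contained sketch; meshRegistry and clusterWatcher are simplifications standing in for Multicluster, kubeController, and the aggregate registry, not Istio's real types:

import "sync"

// clusterWatcher stands in for kubeController: it watches one remote cluster until
// its stop channel is closed.
type clusterWatcher struct {
    stopCh chan struct{}
}

func (w *clusterWatcher) Run() {
    <-w.stopCh // would watch Services, Pods, and nodes until asked to stop
}

// meshRegistry stands in for the Multicluster object plus the aggregate controller.
type meshRegistry struct {
    mu       sync.Mutex
    watchers map[string]*clusterWatcher
}

// AddMemberCluster: create a watcher for the new cluster, remember it, and start it.
func (r *meshRegistry) AddMemberCluster(clusterID string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    w := &clusterWatcher{stopCh: make(chan struct{})}
    r.watchers[clusterID] = w // register the cluster so discovery aggregates it
    go w.Run()                // resource discovery for the remote cluster starts here
}

// DeleteMemberCluster: stop the watcher and forget the cluster.
func (r *meshRegistry) DeleteMemberCluster(clusterID string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    if w, ok := r.watchers[clusterID]; ok {
        close(w.stopCh) // tells the goroutine running the watcher to exit
        delete(r.watchers, clusterID)
    }
}

// UpdateMemberCluster is delete followed by add, just as in Istio.
func (r *meshRegistry) UpdateMemberCluster(clusterID string) {
    r.DeleteMemberCluster(clusterID)
    r.AddMemberCluster(clusterID)
}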
Summary
This post gave a brief, source-level overview of Istio's multi-cluster support.