Foreword
This article describes how to apply the Flomesh[1] service mesh to strengthen the service governance capabilities of Spring Cloud, lowering the barrier for Spring Cloud microservice architectures to adopt a service mesh while keeping the stack under your own control.
The document is continuously updated on GitHub[2]; feedback and discussion are welcome: https://github.com/flomesh-io...
Architecture
(Figure: architecture diagram)
Environment Setup
Set up a Kubernetes environment. You can build a cluster with kubeadm, or use minikube, k3s, Kind, and so on; this article uses k3s.
Install k3s[4] with k3d[3]. k3d runs k3s inside Docker containers, so make sure Docker is already installed.
$ k3d cluster create spring-demo -p "81:80@loadbalancer" --k3s-server-arg '--no-deploy=traefik'
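Before moving on, it may be worth a quick sanity check that the cluster is reachable. These are plain kubectl commands, nothing demo-specific; the node name depends on the cluster name chosen above:

# verify the API server responds and the k3d node is Ready
$ kubectl cluster-info
$ kubectl get nodes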
Install Flomesh
Clone the code from the repository https://github.com/flomesh-io/flomesh-bookinfo-demo.git and change into the flomesh-bookinfo-demo/kubernetes directory.
All Flomesh components, as well as the YAML files used by the demo, live in this directory.
Install Cert Manager
$ kubectl apply -f artifacts/cert-manager-v1.3.1.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
Note: make sure all pods in the cert-manager namespace are up and running:
$ kubectl get pod -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-webhook-56fdcbb848-q7fn5      1/1     Running   0          98s
cert-manager-59f6c76f4b-z5lgf              1/1     Running   0          98s
cert-manager-cainjector-59f76f7fff-flrr7   1/1     Running   0          98s
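If you prefer not to poll manually, a minimal alternative is to block until the pods become ready; the same check works for the flomesh and ingress-pipy namespaces installed below:

# wait up to two minutes for every pod in the namespace to be Ready
$ kubectl wait --for=condition=Ready pod --all -n cert-manager --timeout=120s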
Install the Pipy Operator
$ kubectl apply -f artifacts/pipy-operator.yaml
After running the command you will see output similar to the following:
namespace/flomesh created
customresourcedefinition.apiextensions.k8s.io/proxies.flomesh.io created
customresourcedefinition.apiextensions.k8s.io/proxyprofiles.flomesh.io created
serviceaccount/operator-manager created
role.rbac.authorization.k8s.io/leader-election-role created
clusterrole.rbac.authorization.k8s.io/manager-role created
clusterrole.rbac.authorization.k8s.io/metrics-reader created
clusterrole.rbac.authorization.k8s.io/proxy-role created
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/proxy-rolebinding created
configmap/manager-config created
service/operator-manager-metrics-service created
service/proxy-injector-svc created
service/webhook-service created
deployment.apps/operator-manager created
deployment.apps/proxy-injector created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/proxy-injector-webhook-cfg created
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created
Note: make sure all pods in the flomesh namespace are up and running:
$ kubectl get pod -n flomesh
NAME                               READY   STATUS    RESTARTS   AGE
proxy-injector-5bccc96595-spl6h    1/1     Running   0          39s
operator-manager-c78bf8d5f-wqgb4   1/1     Running   0          39s
Install the Ingress controller: ingress-pipy
$ kubectl apply -f ingress/ingress-pipy.yaml
namespace/ingress-pipy created
customresourcedefinition.apiextensions.k8s.io/ingressparameters.flomesh.io created
serviceaccount/ingress-pipy created
role.rbac.authorization.k8s.io/ingress-pipy-leader-election-role created
clusterrole.rbac.authorization.k8s.io/ingress-pipy-role created
rolebinding.rbac.authorization.k8s.io/ingress-pipy-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/ingress-pipy-rolebinding created
configmap/ingress-config created
service/ingress-pipy-cfg created
service/ingress-pipy-controller created
service/ingress-pipy-defaultbackend created
service/webhook-service created
deployment.apps/ingress-pipy-cfg created
deployment.apps/ingress-pipy-controller created
deployment.apps/ingress-pipy-manager created
certificate.cert-manager.io/serving-cert created
issuer.cert-manager.io/selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration configured
validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration configured
Check the status of the pods in the ingress-pipy namespace:
$ kubectl get pod -n ingress-pipy
NAME                                       READY   STATUS    RESTARTS   AGE
svclb-ingress-pipy-controller-8pk8k        1/1     Running   0          71s
ingress-pipy-cfg-6bc649cfc7-8njk7          1/1     Running   0          71s
ingress-pipy-controller-76cd866d78-m7gfp   1/1     Running   0          71s
ingress-pipy-manager-5f568ff988-tw5w6      0/1     Running   0          70s
At this point you have successfully installed all of the Flomesh components, including the operator and the ingress controller.
Middleware
The demo needs middleware to store logs and metrics. For convenience we mock it with pipy: the data is simply printed to the console.
In addition, the service-governance configuration is served by a mocked pipy config service.
log & metrics
$ cat > middleware.js <<EOF
pipy()

.listen(8123)
  .link('mock')

.listen(9001)
  .link('mock')

.pipeline('mock')
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      console.log(req.body.toString()),
      new Message('OK')
    )
  )
  .encodeHttpResponse()
EOF
$ docker run --rm --name middleware --entrypoint "pipy" -v ${PWD}:/script -p 8123:8123 -p 9001:9001 flomesh/pipy-pjs:0.4.0-118 /script/middleware.js
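As a quick smoke test of the mock, you can post an arbitrary body to either port (the payload below is just an example); the script prints the body on the container console and answers OK:

$ curl -X POST http://127.0.0.1:8123/ -d '{"log":"hello"}'
OK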
pipy config
$ cat > mock-config.json <<EOF
{
  "ingress": {},
  "inbound": {
    "rateLimit": -1,
    "dataLimit": -1,
    "circuitBreak": false,
    "blacklist": []
  },
  "outbound": {
    "rateLimit": -1,
    "dataLimit": -1
  }
}
EOF
$ cat > mock.js <<EOF
pipy({
  _CONFIG_FILENAME: 'mock-config.json',

  _serveFile: (req, filename, type) => (
    new Message(
      {
        bodiless: req.head.method === 'HEAD',
        headers: {
          'etag': os.stat(filename)?.mtime | 0,
          'content-type': type,
        },
      },
      req.head.method === 'HEAD' ? null : os.readFile(filename),
    )
  ),

  _router: new algo.URLRouter({
    '/config': req => _serveFile(req, _CONFIG_FILENAME, 'application/json'),
    '/*': () => new Message({ status: 404 }, 'Not found'),
  }),
})

// Config
.listen(9000)
  .decodeHttpRequest()
  .replaceMessage(
    req => (
      _router.find(req.head.path)(req)
    )
  )
  .encodeHttpResponse()
EOF
$ docker run --rm --name mock --entrypoint "pipy" -v ${PWD}:/script -p 9000:9000 flomesh/pipy-pjs:0.4.0-118 /script/mock.js
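Again a quick check, assuming the container runs on the local machine: requesting /config should return the contents of mock-config.json written above, while any other path answers 404 Not found:

$ curl http://127.0.0.1:9000/config
# returns the JSON from mock-config.json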
Running the Demo
The demo runs in a separate namespace, flomesh-spring. Run kubectl apply -f base/namespace.yaml to create it. If you describe the namespace, you will find it carries the flomesh.io/inject=true label.
This label tells the operator's admission webhook to intercept pod creation in any namespace so labeled.
$ kubectl describe ns flomesh-spring
Name:         flomesh-spring
Labels:       app.kubernetes.io/name=spring-mesh
              app.kubernetes.io/version=1.19.0
              flomesh.io/inject=true
              kubernetes.io/metadata.name=flomesh-spring
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
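For reference, the same label can be attached by hand to opt any other namespace into injection (my-namespace below is a hypothetical name):

$ kubectl label namespace my-namespace flomesh.io/inject=true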
Let's first look at the CRD ProxyProfile provided by Flomesh. In this demo it defines the sidecar container fragment and the scripts the sidecar uses; see sidecar/proxy-profile.yaml for details. Run the command below to create the CRD resource.
$ kubectl apply -f sidecar/proxy-profile.yaml
Check that it was created successfully:
$ kubectl get pf -o wide
NAME                         NAMESPACE        DISABLED   SELECTOR                                     CONFIG                                                                AGE
proxy-profile-002-bookinfo   flomesh-spring   false      {"matchLabels":{"sys":"bookinfo-samples"}}   {"flomesh-spring":"proxy-profile-002-bookinfo-fsmcm-b67a9e39-0418"}   27s
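To give a feel for the resource, here is an illustrative sketch of a ProxyProfile reconstructed from the columns in the output above; the apiVersion and the exact field layout are assumptions, so treat sidecar/proxy-profile.yaml in the repository as the authoritative definition:

# illustrative only -- apiVersion and field names are assumptions
apiVersion: flomesh.io/v1alpha1
kind: ProxyProfile
metadata:
  name: proxy-profile-002-bookinfo
spec:
  selector:
    matchLabels:
      sys: bookinfo-samples   # pods matching this selector get the sidecar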
As the services have startup dependencies, you need to deploy them one by one in strict order. Before starting, check the Endpoints section of base/clickhouse.yaml.
To provide the endpoints for accessing the middleware, change the IP addresses in base/clickhouse.yaml, base/metrics.yaml, and base/config.yaml to the IP address of your machine (not 127.0.0.1).
After making the changes, run the following commands:
$ kubectl apply -f base/clickhouse.yaml
$ kubectl apply -f base/metrics.yaml
$ kubectl apply -f base/config.yaml
$ kubectl get endpoints samples-clickhouse samples-metrics samples-config
NAME                 ENDPOINTS            AGE
samples-clickhouse   192.168.1.101:8123   3m
samples-metrics      192.168.1.101:9001   3s
samples-config       192.168.1.101:9000   3m
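If you are unsure where to edit, the usual Kubernetes pattern behind such endpoints is a selector-less Service paired with a manually managed Endpoints object. Below is a minimal sketch for ClickHouse, assuming the names and port shown above; the actual base/clickhouse.yaml may differ:

apiVersion: v1
kind: Service
metadata:
  name: samples-clickhouse
spec:
  ports:
    - port: 8123
---
apiVersion: v1
kind: Endpoints
metadata:
  name: samples-clickhouse   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.101    # your host IP, not 127.0.0.1
    ports:
      - port: 8123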
Deploy the registry (discovery server)
$ kubectl apply -f base/discovery-server.yaml
Check the status of the registry pod and make sure all 3 containers are running.
$ kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
samples-discovery-server-v1-85798c47d4-dr72k   3/3     Running   0          96s
Deploy the config center
$ kubectl apply -f base/config-service.yaml
Deploy the API gateway and the bookinfo services
$ kubectl apply -f base/bookinfo-v1.yaml
$ kubectl apply -f base/bookinfo-v2.yaml
$ kubectl apply -f base/productpage-v1.yaml
$ kubectl apply -f base/productpage-v2.yaml
Check the pod status; you can see that sidecar containers have been injected into the pods.
$ kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
samples-discovery-server-v1-85798c47d4-p6zpb       3/3     Running   0          19h
samples-config-service-v1-84888bfb5b-8bcw9         1/1     Running   0          19h
samples-api-gateway-v1-75bb6456d6-nt2nl            3/3     Running   0          6h43m
samples-bookinfo-ratings-v1-6d557dd894-cbrv7       3/3     Running   0          6h43m
samples-bookinfo-details-v1-756bb89448-dxk66       3/3     Running   0          6h43m
samples-bookinfo-reviews-v1-7778cdb45b-pbknp       3/3     Running   0          6h43m
samples-api-gateway-v2-7ddb5d7fd9-8jgms            3/3     Running   0          6h37m
samples-bookinfo-ratings-v2-845d95fb7-txcxs        3/3     Running   0          6h37m
samples-bookinfo-reviews-v2-79b4c67b77-ddkm2       3/3     Running   0          6h37m
samples-bookinfo-details-v2-7dfb4d7c-jfq4j         3/3     Running   0          6h37m
samples-bookinfo-productpage-v1-854675b56-8n2xd    1/1     Running   0          7m1s
samples-bookinfo-productpage-v2-669bd8d9c7-8wxsf   1/1     Running   0          6m57s
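To see what exactly was injected into a 3/3 pod, list its containers (pod and container names will vary in your cluster):

$ kubectl get pod samples-api-gateway-v1-75bb6456d6-nt2nl -o jsonpath='{.spec.containers[*].name}'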
Add Ingress rules
Run the following command to add the ingress rules.
$ kubectl apply -f ingress/ingress.yaml
Preparation before testing
All access to the demo services goes through the ingress controller, so we first need to obtain the load balancer's IP address.
# Obtain the controller IP; here we also append the port
ingressAddr=`kubectl get svc ingress-pipy-controller -n ingress-pipy -o jsonpath='{.spec.clusterIP}'`:81
Since we are using a k3s cluster created with k3d, and the create command included the -p 81:80@loadbalancer option, the ingress controller is reachable at 127.0.0.1:81. So here we simply run ingressAddr=127.0.0.1:81.
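As an aside, on a cluster whose ingress controller Service is backed by a real external load balancer, you would typically read the address from the Service status instead, along the lines of the following (the port depends on how the Service exposes itself):

ingressAddr=`kubectl get svc ingress-pipy-controller -n ingress-pipy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`:80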
In the ingress rules we specified a host for each rule, so every request must supply the matching host via the HTTP Host header.
Alternatively, add entries to /etc/hosts:
$ kubectl get ing ingress-pipy-bookinfo -n flomesh-spring -o jsonpath="{range .spec.rules[*]}{.host}{'\n'}"
api-v1.flomesh.cn
api-v2.flomesh.cn
fe-v1.flomesh.cn
fe-v2.flomesh.cn

# add the entries to /etc/hosts
127.0.0.1 api-v1.flomesh.cn api-v2.flomesh.cn fe-v1.flomesh.cn fe-v2.flomesh.cn
Verification
$ curl http://127.0.0.1:81/actuator/health -H 'Host: api-v1.flomesh.cn'
{"status":"UP","groups":["liveness","readiness"]}

# OR
$ curl http://api-v1.flomesh.cn:81/actuator/health
{"status":"UP","groups":["liveness","readiness"]}
Testing
Canary release
In the v1 services, we add a rating and a review for a book.
# rate a book
$ curl -X POST http://$ingressAddr/bookinfo-ratings/ratings \
    -H "Content-Type: application/json" \
    -H "Host: api-v1.flomesh.cn" \
    -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","rating":3}'

$ curl http://$ingressAddr/bookinfo-ratings/ratings/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"

# review a book
$ curl -X POST http://$ingressAddr/bookinfo-reviews/reviews \
    -H "Content-Type: application/json" \
    -H "Host: api-v1.flomesh.cn" \
    -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"2099a055-1e21-46ef-825e-9e0de93554ea","review":"This was OK.","rating":3}'

$ curl http://$ingressAddr/bookinfo-reviews/reviews/2099a055-1e21-46ef-825e-9e0de93554ea -H "Host: api-v1.flomesh.cn"
After running the commands above, open the two front ends in a browser (http://fe-v1.flomesh.cn:81/productpage?u=normal and http://fe-v2.flomesh.cn:81/productpage?u=normal); only the v1 front end shows the records we just added.
(Screenshot: v1 front end, showing the newly added rating and review)
(Screenshot: v2 front end, without them)
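If you prefer the command line to a browser, a rough equivalent is to count occurrences of the review text in each page, assuming the review body is rendered verbatim in the HTML and the /etc/hosts entries from above are in place:

$ curl -s 'http://fe-v1.flomesh.cn:81/productpage?u=normal' | grep -c 'This was OK.'
$ curl -s 'http://fe-v2.flomesh.cn:81/productpage?u=normal' | grep -c 'This was OK.'
# expect a non-zero count from v1 only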
Circuit breaking
For circuit breaking, we force the breaker open by changing inbound.circuitBreak in mock-config.json to true:
{ "ingress": {}, "inbound": { "rateLimit": -1, "dataLimit": -1, "circuitBreak": true, //here "blacklist": [] }, "outbound": { "rateLimit": -1, "dataLimit": -1 }}
$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
Connection: keep-alive
Content-Length: 27

Service Circuit Break Open
Rate limiting
Modify the pipy config again, setting inbound.rateLimit to 1.
{ "ingress": {}, "inbound": { "rateLimit": 1, //here "dataLimit": -1, "circuitBreak": false, "blacklist": [] }, "outbound": { "rateLimit": -1, "dataLimit": -1 }}
We use wrk to generate load: 20 threads and 20 connections, for 30 seconds:
$ wrk -t20 -c20 -d30s --latency http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
Running 30s test @ http://127.0.0.1:81/actuator/health
  20 threads and 20 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   951.51ms  206.23ms   1.04s    93.55%
    Req/Sec     0.61      1.71     10.00    93.55%
  Latency Distribution
     50%    1.00s
     75%    1.01s
     90%    1.02s
     99%    1.03s
  620 requests in 30.10s, 141.07KB read
Requests/sec:     20.60
Transfer/sec:      4.69KB
The result is 20.60 req/s overall, i.e. about 1 req/s per connection, which matches the configured limit.
Blacklist and whitelist
Make the following change to mock-config.json, using the pod IP of the ingress controller as the blacklisted address. First look up that pod IP:
$ kubectl get pod -n ingress-pipy ingress-pipy-controller-76cd866d78-4cqqn -o jsonpath='{.status.podIP}'
10.42.0.78
{ "ingress": {}, "inbound": { "rateLimit": -1, "dataLimit": -1, "circuitBreak": false, "blacklist": ["10.42.0.78"] //here }, "outbound": { "rateLimit": -1, "dataLimit": -1 }}
Then access the gateway endpoint again:
$ curl -i http://$ingressAddr/actuator/health -H 'Host: api-v1.flomesh.cn'
HTTP/1.1 503 Service Unavailable
content-type: text/plain
Connection: keep-alive
Content-Length: 20

Service Unavailable
References

[1] Flomesh: https://flomesh.cn/
[2] github: https://github.com/flomesh-io...
[3] k3d: https://k3d.io/
[4] k3s: https://github.com/k3s-io/k3s