Preface:
Early on, Jenkins carried all of the CI/CD work in Kubernetes (see the earlier post on the evolution of my Jenkins Pipeline). Here the plan is to split the CD part out of that pipeline and hand it over to Spinnaker!
Of course, the proper order of things would be to unify the Jenkins and Spinnaker accounts first by integrating both with LDAP. Spinnaker's account system is already hooked up to LDAP, and I have experimented with Jenkins + LDAP before, so the steps for integrating Jenkins with LDAP are omitted here. The goal, after all, is the pipeline-splitting exercise; unifying the account systems is not that urgent yet. That said, the first thing I feel is still missing is an image-scanning step, so let's start with image scanning — security comes first, after all.
Image scanning in the Jenkins pipeline
Note: Harbor is used as the image registry.
Trivy
Harbor's default image scanner is Trivy. If I remember correctly, it used to be Clair in earlier versions?
Checking the Harbor API (it cannot feed a scan report back into the pipeline)
A quick look at the Harbor API shows that it can trigger a scan directly:
But there is a drawback: I want the report to show up directly in the Jenkins pipeline, yet the GET endpoint only returns the log. I can hardly have the Jenkins pipeline trigger an automatic scan in Harbor and then still log in to Harbor afterwards to check whether the image has vulnerabilities, can I? So for external consumption this feature is rather underwhelming. Still, in the spirit of learning I gave automatic image scanning in a Jenkins pipeline a try. I first referenced 泽阳's example of automatic image cleanup:
import groovy.json.JsonSlurper

//Docker image registry information
registryServer = "harbor.layame.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
harborAPI = ""

//pipeline
pipeline {
    agent { node { label "build01" } }
    //build trigger
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', branches: [[name: "${branchName}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image ") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage("scan Image ") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
Reworking the spinnaker-nginx-demo pipeline
As before, I use my spinnaker-nginx-demo example for verification (see the earlier Jenkins configuration for spinnaker-nginx-demo). The pipeline is modified as follows:
//Docker image registry information
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"

//pipeline
pipeline {
    agent { node { label "build01" } }
    //build trigger
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', branches: [[name: "${branchName}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image ") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage("scan Image ") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        harborAPI = "https://harbor.xxxx.com/api/v2.0/projects/${projectName}/repositories/${repoName}"
                        apiURL = "artifacts/${data}/scan"
                        sh """
                        curl -X POST "${harborAPI}/${apiURL}" -H "accept: application/json" -u ${username}:${password}
                        """
                    }
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
The modification is based on 阳明's image-cleanup pipeline script, with a scan Image stage added. Everything here simply follows the Harbor API documentation; for more detail, refer to the official Harbor API.
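For reference, the scan result itself can also be pulled back over the same v2.0 API, which would make it possible to archive the report in Jenkins instead of logging in to Harbor. Below is only a rough bash sketch assuming a Harbor 2.x instance with Trivy enabled and jq available on the agent; the host, project, repo, tag, and credentials are placeholders, and the exact endpoint and MIME-type strings should be double-checked against your Harbor version's API docs:

#!/usr/bin/env bash
# Sketch: trigger a scan, poll the scan overview, then dump the vulnerability report.
set -euo pipefail

HARBOR="https://harbor.xxxx.com"   # placeholder registry host (registryServer above)
PROJECT="spinnaker"                # placeholder projectName
REPO="spinnaker-nginx-demo"        # placeholder repoName
TAG="202111192008"                 # placeholder tag (the ${data} variable in the pipeline)
AUTH="username:password"           # placeholder Harbor credentials

BASE="${HARBOR}/api/v2.0/projects/${PROJECT}/repositories/${REPO}/artifacts/${TAG}"

# 1. trigger the scan (same call the scan Image stage makes)
curl -sS -X POST -u "${AUTH}" -H "accept: application/json" "${BASE}/scan"

# 2. poll the scan overview until it is no longer running
for i in $(seq 1 30); do
  STATUS=$(curl -sS -u "${AUTH}" \
    -H "X-Accept-Vulnerabilities: application/vnd.security.vulnerability.report; version=1.1" \
    "${BASE}?with_scan_overview=true" | jq -r '.scan_overview[].scan_status' | head -n1)
  echo "scan status: ${STATUS}"
  if [ "${STATUS}" = "Success" ]; then break; fi
  sleep 10
done

# 3. dump the full vulnerability list; this JSON could be archived as a Jenkins artifact
curl -sS -u "${AUTH}" \
  -H "X-Accept-Vulnerabilities: application/vnd.security.vulnerability.report; version=1.1" \
  "${BASE}/additions/vulnerabilities" | jq . > harbor-scan-report.json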
Triggering the Jenkins build
The spinnaker-nginx-demo pipeline is triggered from GitLab; updating any file on the master branch in the GitLab repository triggers the Jenkins build:
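As an aside, the GenericTrigger block in the pipeline comes from the Generic Webhook Trigger plugin, so the GitLab webhook simply POSTs to the plugin's invoke endpoint with the token declared in the pipeline. A minimal sketch for firing the job by hand (the Jenkins URL is a placeholder; the payload only needs the $.ref field that genericVariables extracts):

curl -sS -X POST \
  "https://jenkins.xxxx.com/generic-webhook-trigger/invoke?token=spinnaker-nginx-demo" \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/master"}'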
Log in to the Harbor registry to verify:
OK, verification succeeded. If you have other needs, consult the Harbor API docs — provided, of course, that Harbor supports the feature... Not being able to surface the scan report in Jenkins made me give up on Harbor's Trivy. Then again, it may simply be that I am not familiar with Trivy and only read the Harbor API instead of going through the Trivy documentation properly...
anchore-engine
Installing anchore-engine with Helm
I stumbled on anchore-engine while searching for keywords like "jenkins scan image": https://cloud.tencent.com/developer/article/1666535 — a very good article. The official site offers a Helm installation method: https://engine.anchore.io/docs/install/helm/. Let's install it and try it out.
[root@k8s-master-01 anchore-engine]# helm repo add anchore https://charts.anchore.io
[root@k8s-master-01 anchore-engine]# helm repo list
Note: full disclosure, I had already been through this once, so the helm repo was added earlier and I had also installed version 1.14.6. I never managed to integrate that with Jenkins, though, so I figured I would try the latest version... At first reality seemed to defeat me again — maybe the Jenkins plugin was too old? (The step-by-step verification below shows it was really just me not digging deep enough... it does work.) A good excuse to review the helm commands along the way!
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm repo update
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm fetch anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# ls
[root@k8s-master-01 anchore-engine]# tar zxvf anchore-engine-1.15.1.tgz
[root@k8s-master-01 anchore-engine]# cd anchore-engine
vim values.yaml
I only changed the storage size and set the admin password and email!
helm install anchore-engine -f values.yaml . -n anchore-engine
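As an alternative to editing values.yaml by hand, the same few overrides could be passed on the command line. This is only a sketch: the key names below (anchoreGlobal.defaultAdminPassword, anchoreGlobal.defaultAdminEmail, postgresql.persistence.size) are my reading of the chart's values.yaml around version 1.15 and should be confirmed with helm show values first:

# inspect the chart defaults before overriding anything
helm show values anchore/anchore-engine | less

# assumed key names -- verify against your chart version
helm install anchore-engine anchore/anchore-engine \
  -n anchore-engine --create-namespace \
  --set anchoreGlobal.defaultAdminPassword='xxxxxx' \
  --set anchoreGlobal.defaultAdminEmail='admin@xxxx.com' \
  --set postgresql.persistence.size=20Gi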
Then I ran into something annoying... why does everything assume the Kubernetes domain is cluster.local by default? I went through the configuration files and could not find where to change it...
Jenkins configuration
First, install the required plugin in Jenkins (the Anchore Container Image Scanner plugin).
Configuration:
Manage Jenkins → Configure System:
Building the pipeline:
Since this is just a test, I started with the demo from the "Using Anchore Engine to round out the DevSecOps toolchain" article linked above (changing the build node, plus the GitHub repository and Docker Hub registry credentials):
Create a new pipeline job in Jenkins named anchore-enchore
pipeline {
    agent { node { label "build01" } }
    environment {
        registry = "duiniwukenaihe/spinnaker-cd"   // image repository to push to; adjust to your environment
        registryCredential = 'duiniwukenaihe'      // credential used to log in to the registry; adjust to your environment
    }
    stages {
        // pull the source code from the repository
        stage('Cloning Git') {
            steps {
                git 'https://github.com.cnpmjs.org/duiniwukenaihe/docker-dvwa.git'
            }
        }
        // build the image
        stage('Build Image') {
            steps {
                script {
                    app = docker.build(registry + ":$BUILD_NUMBER")
                }
            }
        }
        // push the image to the registry
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        app.push()
                    }
                }
            }
        }
        // scan the image
        stage('Container Security Scan') {
            steps {
                sh 'echo "' + registry + ':$BUILD_NUMBER `pwd`/Dockerfile" > anchore_images'
                anchore engineRetries: "240", name: 'anchore_images'
            }
        }
        stage('Cleanup') {
            steps {
                sh script: "docker rmi " + registry + ":$BUILD_NUMBER"
            }
        }
    }
}
Note: github.com is changed to github.com.cnpmjs.org purely for acceleration... the Great Firewall makes pulling the code directly impossible.
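For context, the anchore step in the scan stage reads the file named by name: — here anchore_images — where each line is an image reference optionally followed by the path of the Dockerfile that built it, so several images can be queued for analysis in one run. A small hedged sketch of generating that file (the second image is purely hypothetical, just to illustrate batching):

# one image per line: "<image> [path-to-Dockerfile]"
echo "duiniwukenaihe/spinnaker-cd:42 $(pwd)/Dockerfile" >  anchore_images
# hypothetical second image to show that multiple analyses can be submitted together
echo "duiniwukenaihe/some-other-image:latest"           >> anchore_images
cat anchore_images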
Running the pipeline job
Anyway, several attempts all ended in failure... Instead of shrugging it off, I kept peeling back the layers to track down the problem...
Installing anchore-engine with docker-compose
Following the same "Using Anchore Engine to round out the DevSecOps toolchain" tutorial,
I set up a docker-compose deployment:
Note: my cluster's default CRI is containerd; the k8s-node-06 node uses Docker as its runtime and does not take part in scheduling, so that is the server anchore-engine will be installed on. Internal IP: 10.0.4.18.
Prerequisite: install docker-compose:
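In case the node does not have it yet, a minimal sketch of installing a standalone docker-compose binary (pin whichever release you actually want; 1.29.2 is just the v1 release used for illustration here):

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose version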
docker-compose up -d
I used the default YAML file as-is without any extra modification — at this early stage it is only a test.
# curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d
# This is a docker-compose file for development purposes. It refereneces unstable developer builds from the HEAD of master branch in https://github.com/anchore/anchore-engine
# For a compose file intended for use with a released version, see https://engine.anchore.io/docs/quickstart/
#
---
version: '2.1'
volumes:
  anchore-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create anchore-db-volume"
    external: false

services:
  # The primary API endpoint service
  api:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    ports:
      - "8228:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=api
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "apiext"]
  # Catalog is the primary persistence and state manager of the system
  catalog:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    expose:
      - 8228
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=catalog
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "catalog"]
  queue:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=queue
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "simplequeue"]
  policy-engine:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=policy-engine
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
      - ANCHORE_VULNERABILITIES_PROVIDER=grype
    command: ["anchore-manager", "service", "start", "policy_engine"]
  analyzer:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    volumes:
      - /analysis_scratch
    command: ["anchore-manager", "service", "start", "analyzer"]
  db:
    image: "postgres:9"
    volumes:
      - anchore-db-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    expose:
      - 5432
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
#  # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
#  prometheus:
#    image: docker.io/prom/prometheus:latest
#    depends_on:
#      - api
#    volumes:
#      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#    ports:
#      - "9090:9090"
#
#  # Uncomment this section to run a swagger UI service, for inspecting and interacting with the anchore engine API via a browser (http://localhost:8080 by default, change if needed in both sections below)
#  swagger-ui-nginx:
#    image: docker.io/nginx:latest
#    depends_on:
#      - api
#      - swagger-ui
#    ports:
#      - "8080:8080"
#    volumes:
#      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#  swagger-ui:
#    image: docker.io/swaggerapi/swagger-ui
#    environment:
#      - URL=http://localhost:8080/v1/swagger.json
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#
[root@k8s-node-06 anchore]# docker-compose ps
          Name                         Command                  State            Ports
------------------------------------------------------------------------------------------------
anchore_analyzer_1        /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_api_1             /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchore_catalog_1         /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_db_1              docker-entrypoint.sh postgres    Up (healthy)   5432/tcp
anchore_policy-engine_1   /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
anchore_queue_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
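Before wiring Jenkins up, it is worth confirming that the engine is healthy and that the vulnerability feeds have synced (the first sync can take quite a while). A hedged sketch using the anchore-cli that ships inside the api container, with the admin/foobar credentials from the compose file above:

# run from the directory that holds docker-compose.yaml on k8s-node-06
docker-compose exec api anchore-cli --url http://localhost:8228/v1 --u admin --p foobar system status
# list feed groups and their record counts; scan results are only meaningful once feeds are synced
docker-compose exec api anchore-cli --url http://localhost:8228/v1 --u admin --p foobar system feeds list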
Updating the Jenkins configuration
Pipeline Test:
A wrong guess:
My initial guess was that the Helm deployment did not work because my runtime is containerd rather than Docker, or perhaps because the server version was too new.
The next problem:
How to scan images in a private registry?
But then the next problem appeared: the anchore-enchore pipeline assumes Docker Hub as the image registry, whereas mine is a private Harbor registry, and the spinnaker-nginx-demo application pipeline would not even run once the scan stage was added...
//Docker image registry information
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"

//pipeline
pipeline {
    agent { node { label "build01" } }
    //build trigger
    triggers {
        GenericTrigger(
            causeString: 'Generic Cause',
            genericVariables: [[defaultValue: '', key: 'branchName', regexpFilter: '', value: '$.ref']],
            printContributedVariables: true,
            printPostContent: true,
            regexpFilterExpression: '',
            regexpFilterText: '',
            silentResponse: true,
            token: 'spinnaker-nginx-demo')
    }
    stages {
        stage("CheckOut") {
            steps {
                script {
                    srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                    branchName = branchName - "refs/heads/"
                    currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', branches: [[name: "${branchName}"]], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'gitlab-admin-user', url: "${srcUrl}"]]])
                }
            }
        }
        stage("Push Image ") {
            steps {
                script {
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                        sed -i -- "s/VER/${branchName}/g" app/index.html
                        docker login -u ${username} -p ${password} ${registryServer}
                        docker build -t ${imageName}:${data} .
                        docker push ${imageName}:${data}
                        docker rmi ${imageName}:${data}
                        """
                    }
                }
            }
        }
        stage('Container Security Scan') {
            steps {
                script {
                    sh """
                    echo "start scanning"
                    echo "${imageName}:${data} ${WORKSPACE}/Dockerfile" > anchore_images
                    """
                    anchore engineRetries: "360", forceAnalyze: true, name: 'anchore_images'
                }
            }
        }
        stage("Trigger File") {
            steps {
                script {
                    sh """
                    echo IMAGE=${imageName}:${data} > trigger.properties
                    echo ACTION=DEPLOY >> trigger.properties
                    cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }
    }
}
Inspiration from a GitHub issue:
What was going on? Persistence paid off: after going through the issues in the anchore GitHub repository — https://github.com/anchore/anchore-engine/issues/438 — I found the solution...
Adding the private registry configuration
[root@k8s-node-06 anchore]# docker exec -it d21c8ed1064d bash
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxx
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar --debug image add harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008
With my Harbor registry added, I ran my Jenkins job again, and it looks like my pipelines can now all produce scan reports.
Log in to the anchore_api_1 container to verify:
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli image list
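From the same shell, the analysis result can also be inspected directly — a sketch using the same image reference as above (adjust credentials to your setup):

# block until the analysis of the image finishes
anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar image wait harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008
# list every detected vulnerability (os + non-os packages)
anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar image vuln harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008 all
# evaluate the image against the active policy bundle -- this is what the Jenkins plugin gates on
anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar evaluate check harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008 --detail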
While at it, fix the Helm-deployed anchore-engine too
By the same logic, I now suspect my Helm-deployed anchore-engine failed for the same reason — my earlier guess was simply wrong. Let's apply the same fix and try again!
[root@k8s-master-01 anchore-engine]# kubectl get pods -n anchore-engine
NAME                                                        READY   STATUS    RESTARTS   AGE
anchore-engine-anchore-engine-analyzer-fcf9ffcc8-dv955      1/1     Running   0          10h
anchore-engine-anchore-engine-api-7f98dc568-j6tsz           1/1     Running   0          10h
anchore-engine-anchore-engine-catalog-754b996b75-q5hqg      1/1     Running   0          10h
anchore-engine-anchore-engine-policy-745b6778f7-hbsvx       1/1     Running   0          10h
anchore-engine-anchore-engine-simplequeue-695df4498-wgss4   1/1     Running   0          10h
anchore-engine-postgresql-9cdbb5f7f-4dcnk                   1/1     Running   0          10h
[root@k8s-master-01 anchore-engine]# kubectl get svc -n anchore-engine
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
anchore-engine-anchore-engine-api           ClusterIP   172.19.255.231   <none>        8228/TCP   10h
anchore-engine-anchore-engine-catalog       ClusterIP   172.19.254.163   <none>        8082/TCP   10h
anchore-engine-anchore-engine-policy        ClusterIP   172.19.254.91    <none>        8087/TCP   10h
anchore-engine-anchore-engine-simplequeue   ClusterIP   172.19.253.141   <none>        8083/TCP   10h
anchore-engine-postgresql                   ClusterIP   172.19.252.126   <none>        5432/TCP   10h
[root@k8s-master-01 anchore-engine]# kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=xxxxxx --env ANCHORE_CLI_URL=http://172.19.255.231:8228/v1
[anchore@anchore-cli anchore-cli]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxxxx
[anchore@anchore-cli anchore-cli]$ anchore-cli --url http://172.19.255.231:8228/v1/ --u admin --p xxxx --debug image add harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111192008
That seems to have worked too! So much for my earlier assumption about the container runtime or the version.
I then changed the Jenkins configuration to point at the API address of the Helm-deployed anchore-engine. Because of the cluster.local annoyance mentioned earlier, I simply used the in-cluster service address directly:
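Before pointing the plugin at it, a quick sanity check against that ClusterIP from the throw-away anchore-cli pod created above (password masked as in the listing):

anchore-cli --url http://172.19.255.231:8228/v1 --u admin --p xxxxxx system status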
Running the spinnaker-nginx-demo pipeline job in Jenkins
As before, modifying a file in GitLab triggers the pipeline job. Unfortunately the high-severity vulnerability check FAILed, hahaha — but at least the pipeline finally runs end to end:
Comparing Trivy and anchore-engine
Take the spinnaker-nginx-demo build #107 artifact image for comparison; the artifact tag is harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111201116:
The anchore-engine report:
What the heck — why does the Trivy scan in Harbor report no vulnerabilities?
I instantly fell into the compulsive loop of needing to fix every vulnerability...
To sum up:
- Harbor's built-in image scanner is Trivy; Clair can be chosen instead, and it seems Harbor can even be wired up to anchore-engine.
- anchore-engine needs private registries added with anchore-cli registry add; for the Helm install, remember that the assembled service addresses use cluster.local — adjust that if your cluster uses a custom domain.
- anchore-engine scans more strictly than Trivy.
- Get comfortable with --help: anchore-cli --help (a few handy subcommands are listed below).
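A few anchore-cli invocations that kept coming up while putting this together — treat the exact flags as version-dependent and check the built-in help; <image> is a placeholder for a full image reference:

anchore-cli --help
anchore-cli registry list                      # private registries known to the engine
anchore-cli image list                         # images added for analysis
anchore-cli image vuln <image> all             # full vulnerability list for an image
anchore-cli evaluate check <image> --detail    # policy evaluation (what the Jenkins gate uses)
anchore-cli system status
anchore-cli system feeds list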