On CI/CD: Kubernetes CI/CD with Jenkins and Spinnaker, part 1: adding artifact image scanning


Preface:

Early on, Jenkins handled all of the CI/CD work in Kubernetes (see the earlier Jenkins Pipeline 演进 post). The plan now is to split the CD part out of Jenkins and into Spinnaker.
Of course, the proper first step would be to unify the Jenkins and Spinnaker user accounts through LDAP. Spinnaker's account system is already integrated with LDAP, and I have experimented with Jenkins + LDAP before, so I'll skip the Jenkins LDAP steps here; the goal is to split out the pipeline, and unifying the account systems is not that urgent yet. What I felt was missing first, though, is an image scanning step, so let's tackle image scanning before anything else. Security comes first, after all.

Image scanning in the Jenkins pipeline

Note: Harbor is used as the image registry.

Trivy

Harbor's default image scanner is Trivy; in earlier versions it was Clair, if I remember correctly.

Checking the Harbor API (it cannot feed a scan report back into the pipeline)

A quick look at the Harbor API shows that it can trigger a scan directly:
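For reference, a minimal sketch of what that looks like against the Harbor v2.0 API; the project, repository, tag, credentials and report_id below are placeholders, not values from my setup:

# Trigger a scan of one artifact (Harbor v2.0 API):
curl -s -X POST -u "admin:Harbor12345" -H "accept: application/json" \
  "https://harbor.example.com/api/v2.0/projects/myproject/repositories/myrepo/artifacts/1.0.0/scan"

# Afterwards the scan log can be fetched, but only as raw text:
curl -s -u "admin:Harbor12345" \
  "https://harbor.example.com/api/v2.0/projects/myproject/repositories/myrepo/artifacts/1.0.0/scan/<report_id>/log"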


But there is a drawback: I want the report displayed directly in the Jenkins pipeline, yet the GET endpoint only returns the scan log. I can hardly have the Jenkins pipeline trigger the automatic scan in Harbor and then still log into Harbor afterwards to check whether the image has vulnerabilities. So as an external integration this feature is rather underwhelming. Still, in the spirit of learning I tried automatic image scanning in a Jenkins pipeline anyway, starting from 泽阳's image auto-cleanup example:

import groovy.json.JsonSlurper

//Docker image registry info
registryServer = "harbor.layame.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
harborAPI = ""
data = new Date().format('yyyyMMddHHmm')  // build timestamp used as the image tag (e.g. 202111192008)
//pipeline
pipeline{
    agent { node { label "build01" } }


  // build trigger
    triggers {
          GenericTrigger( causeString: 'Generic Cause', 
                        genericVariables: [[defaultValue: '', key:'branchName', regexpFilter:'', value: '$.ref']],         
                        printContributedVariables: true, 
                        printPostContent: true, 
                        regexpFilterExpression: '', 
                        regexpFilterText: '', 
                        silentResponse: true, 
                        token: 'spinnaker-nginx-demo')
    }


    stages{
        stage("CheckOut"){
            steps{
                script{
                              srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                              branchName = branchName - "refs/heads/"
                              currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', 
                              branches: [[name: "${branchName}"]], 
                              doGenerateSubmoduleConfigurations: false, 
                              extensions: [], 
                              submoduleCfg: [], 
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
                                                   url: "${srcUrl}"]]])
                }
            }
        }

        stage("Push Image"){
            steps{
                script{
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                           sed -i -- "s/VER/${branchName}/g" app/index.html
                           docker login -u ${username} -p ${password} ${registryServer}
                           docker build -t ${imageName}:${data}  .
                           docker push ${imageName}:${data}
                           docker rmi ${imageName}:${data}

                        """
                    }
                }
            }
        }
        stage("Trigger File"){
            steps {
                script{
                    sh """
                        echo IMAGE=${imageName}:${data} >trigger.properties
                        echo ACTION=DEPLOY >> trigger.properties
                        cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }

    }
}

Reworking the spinnaker-nginx-demo pipeline

I'll keep using my spinnaker-nginx-demo example for verification (see the earlier post on the Jenkins configuration for spinnaker-nginx-demo). The modified pipeline is as follows:

//Docker image registry info
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
data = new Date().format('yyyyMMddHHmm')  // build timestamp used as the image tag

//pipeline
pipeline{
    agent { node { label "build01" } }


  // build trigger
    triggers {
          GenericTrigger( causeString: 'Generic Cause', 
                        genericVariables: [[defaultValue: '', key:'branchName', regexpFilter:'', value: '$.ref']],         
                        printContributedVariables: true, 
                        printPostContent: true, 
                        regexpFilterExpression: '', 
                        regexpFilterText: '', 
                        silentResponse: true, 
                        token: 'spinnaker-nginx-demo')
    }


    stages{
        stage("CheckOut"){
            steps{
                script{
                              srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                              branchName = branchName - "refs/heads/"
                              currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', 
                              branches: [[name: "${branchName}"]], 
                              doGenerateSubmoduleConfigurations: false, 
                              extensions: [], 
                              submoduleCfg: [], 
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
                                                   url: "${srcUrl}"]]])
                }
            }
        }

        stage("Push Image"){
            steps{
                script{
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                           sed -i -- "s/VER/${branchName}/g" app/index.html
                           docker login -u ${username} -p ${password} ${registryServer}
                           docker build -t ${imageName}:${data}  .
                           docker push ${imageName}:${data}
                           docker rmi ${imageName}:${data}

                        """
                    }
                }
            }
        }
        stage("scan Image"){
            steps{
                script{
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        harborAPI = "https://harbor.xxxx.com/api/v2.0/projects/${projectName}/repositories/${repoName}"
                        apiURL = "artifacts/${data}/scan"
                        sh """curl -X POST "${harborAPI}/${apiURL}" -H "accept: application/json" -u ${username}:${password}"""
                    }
                }
            }
        }
        stage("Trigger File"){
            steps {
                script{
                    sh """
                        echo IMAGE=${imageName}:${data} >trigger.properties
                        echo ACTION=DEPLOY >> trigger.properties
                        cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }

    }
}

The pipeline was adapted from 阳明's image-cleanup pipeline script, with a scan Image stage added. It all follows the Harbor API documentation; see the official Harbor API reference for more detail.

Triggering a Jenkins build

The spinnaker-nginx-demo pipeline is triggered from GitLab: updating any file on the master branch of the GitLab repository kicks off a Jenkins build:
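For reference, the GitLab push webhook simply POSTs a JSON payload whose $.ref field feeds the branchName variable in the GenericTrigger above, so the same build can be fired by hand; a sketch, where the Jenkins host is a placeholder and the token matches the pipeline's trigger token:

curl -X POST "https://jenkins.example.com/generic-webhook-trigger/invoke?token=spinnaker-nginx-demo" \
     -H "Content-Type: application/json" \
     -d '{"ref": "refs/heads/master"}'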

Log in to the Harbor registry to verify:



OK, verified. If you have other needs, check the Harbor API docs (provided Harbor actually supports the feature, of course). Not being able to surface the scan report in Jenkins made me give up on Harbor's built-in Trivy. It may simply be that I'm not familiar with Trivy and only read the Harbor API instead of digging into the Trivy docs.

anchore-engine

Installing anchore-engine with Helm

I came across anchore-engine by accident while searching for keywords like "jenkins scan image": https://cloud.tencent.com/developer/article/1666535 is a very good article. The official site also offers a Helm install: https://engine.anchore.io/docs/install/helm/, so let's install it and try it out.

[root@k8s-master-01 anchore-engine]# helm repo add anchore https://charts.anchore.io
[root@k8s-master-01 anchore-engine]# helm repo list


Note: full disclosure, I had been through this once before, so the Helm repo was already added and I had already installed version 1.14.6. I never managed to integrate that with Jenkins, so I wanted to try the latest version. Reality seemed to beat me again, and my first guess was that the Jenkins plugin was too old (the step-by-step verification below shows it actually works; I just had not dug deep enough). Along the way it's a chance to review the Helm commands.

[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm repo update
[root@k8s-master-01 anchore-engine]# helm search repo anchore/anchore-engine
[root@k8s-master-01 anchore-engine]# helm fetch anchore/anchore-engine

[root@k8s-master-01 anchore-engine]#  ls
[root@k8s-master-01 anchore-engine]#  tar zxvf anchore-engine-1.15.1.tgz
[root@k8s-master-01 anchore-engine]#  cd anchore-engine


vim values.yaml

I only changed the storage size and set the admin password and email.

helm install anchore-engine -f values.yaml  . -n anchore-engine
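The same overrides can also be passed with --set instead of editing values.yaml. A sketch: the key names below are assumptions based on the chart's values.yaml and may differ between chart versions, so verify them first with helm show values anchore/anchore-engine:

# Hypothetical equivalents of the values.yaml edits (admin password/email, postgres volume size):
helm install anchore-engine anchore/anchore-engine -n anchore-engine \
  --set anchoreGlobal.defaultAdminPassword='xxxxxx' \
  --set anchoreGlobal.defaultAdminEmail='me@example.com' \
  --set postgresql.persistence.size=20Gi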


One annoying discovery: why does everything default the Kubernetes domain to cluster.local? I went through the configuration files and could not find where to change it.

Jenkins configuration

Jenkins first needs the Anchore plugin installed.
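If you prefer the CLI over the plugin manager UI, jenkins-cli can do it too; a sketch, where the Jenkins URL and credentials are placeholders and the plugin id (anchore-container-scanner) is my assumption of the Anchore plugin's short name:

java -jar jenkins-cli.jar -s http://jenkins.example.com/ -auth admin:API_TOKEN \
  install-plugin anchore-container-scanner -deploy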

Configuration:
Manage Jenkins - Configure System:

Build pipeline:

Since this is only a test, I started from the demo in the DevSecOps article above (using Anchore Engine to round out the DevSecOps toolchain), changing only the build node plus the GitHub repository and Docker Hub credentials:

Create a new Jenkins pipeline job: anchore-enchore

pipeline {
    agent { node { label "build01" } }
    environment {
        registry = "duiniwukenaihe/spinnaker-cd"  // registry/repo the image is pushed to; adjust to your environment
        registryCredential = 'duiniwukenaihe'  // credential used to log in to the registry; adjust to your environment
    }
    stages {
        // Jenkins checks out the code from the repository
        stage('Cloning Git') {
            steps {git 'https://github.com.cnpmjs.org/duiniwukenaihe/docker-dvwa.git'}
        }
        // build the image
        stage('Build Image') {
            steps {
                script {
                    app = docker.build(registry + ":$BUILD_NUMBER")
                }
            }
        }
        // push the image to the registry
        stage('Push Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        app.push()
                    }
                }
            }
        }
        // image scan
        stage('Container Security Scan') {
            steps {
                sh 'echo "' + registry + ':$BUILD_NUMBER `pwd`/Dockerfile" > anchore_images'
                anchore engineRetries: "240", name: 'anchore_images'
            }
        }
        stage('Cleanup') {
             steps { sh script: "docker rmi " + registry + ":$BUILD_NUMBER" }
        }
    }
}

Note: github.com is changed to github.com.cnpmjs.org purely as a mirror for speed; otherwise the code is nearly impossible to pull from behind the firewall.

Running the pipeline job

Anyway, several runs all ended in failure. Rather than leave it at that, I slowly peeled the problem apart to find the cause...

Installing anchore-engine with docker-compose

Following the same DevSecOps tutorial, I switched to a docker-compose deployment.
Note: my cluster's default CRI is containerd; the k8s-node-06 node uses Docker as its runtime and takes no scheduled workloads, so anchore-engine will be installed on that server. Internal IP: 10.0.4.18.

Prerequisite: install docker-compose (a sketch follows), then bring the stack up with docker-compose up -d. I used the default compose file without any extra changes, since at this stage it is only a test:
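A minimal sketch of installing docker-compose on that node first; the release version and install path are assumptions, adjust as needed:

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version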

# curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
# docker-compose up -d
# This is a docker-compose file for development purposes. It refereneces unstable developer builds from the HEAD of master branch in https://github.com/anchore/anchore-engine
# For a compose file intended for use with a released version, see https://engine.anchore.io/docs/quickstart/
#
---
version: '2.1'
volumes:
  anchore-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create anchore-db-volume"
    external: false

services:
  # The primary API endpoint service
  api:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    ports:
      - "8228:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=api
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "apiext"]

  # Catalog is the primary persistence and state manager of the system
  catalog:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    expose:
      - 8228
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=catalog
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "catalog"]
  queue:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=queue
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    command: ["anchore-manager", "service", "start", "simplequeue"]
  policy-engine:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=policy-engine
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
      - ANCHORE_VULNERABILITIES_PROVIDER=grype
    command: ["anchore-manager", "service", "start", "policy_engine"]
  analyzer:
    image: anchore/anchore-engine:v1.0.0
    depends_on:
      - db
      - catalog
    expose:
      - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
      - ANCHORE_ENDPOINT_HOSTNAME=analyzer
      - ANCHORE_ADMIN_PASSWORD=foobar
      - ANCHORE_DB_HOST=db
      - ANCHORE_DB_PASSWORD=mysecretpassword
    volumes:
      - /analysis_scratch
    command: ["anchore-manager", "service", "start", "analyzer"]
  db:
    image: "postgres:9"
    volumes:
      - anchore-db-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
    expose:
      - 5432
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
#  # Uncomment this section to add a prometheus instance to gather metrics. This is mostly for quickstart to demonstrate prometheus metrics exported
#  prometheus:
#    image: docker.io/prom/prometheus:latest
#    depends_on:
#      - api
#    volumes:
#      - ./anchore-prometheus.yml:/etc/prometheus/prometheus.yml:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#    ports:
#      - "9090:9090"
#
#  # Uncomment this section to run a swagger UI service, for inspecting and interacting with the anchore engine API via a browser (http://localhost:8080 by default, change if needed in both sections below)
#  swagger-ui-nginx:
#    image: docker.io/nginx:latest
#    depends_on:
#      - api
#      - swagger-ui
#    ports:
#      - "8080:8080"
#    volumes:
#      - ./anchore-swaggerui-nginx.conf:/etc/nginx/nginx.conf:z
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#  swagger-ui:
#    image: docker.io/swaggerapi/swagger-ui
#    environment:
#      - URL=http://localhost:8080/v1/swagger.json
#    logging:
#      driver: "json-file"
#      options:
#        max-size: 100m
#

[root@k8s-node-06 anchore]# docker-compose ps
         Name                        Command                  State               Ports         
------------------------------------------------------------------------------------------------
anchore_analyzer_1        /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchore_api_1             /docker-entrypoint.sh anch ...   Up (healthy)   0.0.0.0:8228->8228/tcp
anchore_catalog_1         /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchore_db_1              docker-entrypoint.sh postgres    Up (healthy)   5432/tcp              
anchore_policy-engine_1   /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp              
anchore_queue_1           /docker-entrypoint.sh anch ...   Up (healthy)   8228/tcp
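Before pointing Jenkins at this instance, it is worth a quick sanity check that the engine API answers on the node. A sketch, assuming the /health endpoint used by the quickstart healthchecks and the admin/foobar defaults from the compose file above:

curl -s http://10.0.4.18:8228/health
anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar system status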

Updating the Jenkins configuration

Pipeline Test:

A wrong guess:

Maybe the Helm deployment did not work because my runtime is containerd rather than Docker? Or because the server version was too new?

The problem that followed:

How do I scan images from a private registry?

Then the next problem arrived: the anchore-enchore pipeline assumes Docker Hub as the registry, while mine is a private Harbor registry, and the spinnaker-nginx-demo application pipeline with the scan stage added would not even run.

//Docker image registry info
registryServer = "harbor.xxxx.com"
projectName = "${JOB_NAME}".split('-')[0]
repoName = "${JOB_NAME}"
imageName = "${registryServer}/${projectName}/${repoName}"
data = new Date().format('yyyyMMddHHmm')  // build timestamp used as the image tag
//pipeline
pipeline{
    agent { node { label "build01" } }


  // build trigger
    triggers {
          GenericTrigger( causeString: 'Generic Cause', 
                        genericVariables: [[defaultValue: '', key:'branchName', regexpFilter:'', value: '$.ref']],         
                        printContributedVariables: true, 
                        printPostContent: true, 
                        regexpFilterExpression: '', 
                        regexpFilterText: '', 
                        silentResponse: true, 
                        token: 'spinnaker-nginx-demo')
    }


    stages{
        stage("CheckOut"){
            steps{
                script{
                              srcUrl = "https://gitlab.xxxx.com/zhangpeng/spinnaker-nginx-demo.git"
                              branchName = branchName - "refs/heads/"
                              currentBuild.description = "Trigger by ${branchName}"
                    println("${branchName}")
                    checkout([$class: 'GitSCM', 
                              branches: [[name: "${branchName}"]], 
                              doGenerateSubmoduleConfigurations: false, 
                              extensions: [], 
                              submoduleCfg: [], 
                              userRemoteConfigs: [[credentialsId: 'gitlab-admin-user',
                                                   url: "${srcUrl}"]]])
                }
            }
        }

        stage("Push Image"){
            steps{
                script{
                    withCredentials([usernamePassword(credentialsId: 'harbor-admin-user', passwordVariable: 'password', usernameVariable: 'username')]) {
                        sh """
                           sed -i -- "s/VER/${branchName}/g" app/index.html
                           docker login -u ${username} -p ${password} ${registryServer}
                           docker build -t ${imageName}:${data}  .
                           docker push ${imageName}:${data}
                           docker rmi ${imageName}:${data}

                        """
                    }
                }
            }
        }
      
        stage('Container Security Scan') {
            steps {
                script{
                    sh """
                       echo "start scanning"
                       echo "${imageName}:${data} ${WORKSPACE}/Dockerfile" > anchore_images
                    """
                    anchore engineRetries: "360", forceAnalyze: true, name: 'anchore_images'
                }
            }
        }
        stage("Trigger File"){
            steps {
                script{
                    sh """
                        echo IMAGE=${imageName}:${data} >trigger.properties
                        echo ACTION=DEPLOY >> trigger.properties
                        cat trigger.properties
                    """
                    archiveArtifacts allowEmptyArchive: true, artifacts: 'trigger.properties', followSymlinks: false
                }
            }
        }

    }
}

Inspiration found in a GitHub issue:

What was going on? Persistence paid off: reading through the issues in the anchore GitHub repository, https://github.com/anchore/anchore-engine/issues/438, I found the fix.

Adding the private registry configuration

[root@k8s-node-06 anchore]# docker exec -it d21c8ed1064d bash
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxx
[anchore@d21c8ed1064d anchore-engine]$ anchore-cli --url http://10.0.4.18:8228/v1/ --u admin --p foobar --debug image add harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008


With my Harbor registry added, I reran my Jenkins job, and the pipelines can now all produce reports.



Log into the anchore_api_1 container to verify:

[anchore@d21c8ed1064d anchore-engine]$ anchore-cli image list
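Besides listing images, anchore-cli can print the vulnerability report and the policy evaluation directly, which is roughly what the Jenkins plugin surfaces; a sketch using the image added above, run inside the same container:

anchore-cli image vuln harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008 all
anchore-cli evaluate check harbor.layame.com/spinnaker/spinnaker-nginx-demo:202111192008 --detail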

Fixing the Helm-deployed anchore-engine the same way

Likewise, I now suspect my Helm deployment hit the same error and that my earlier guess was wrong, so let's apply the same change and try again.

[root@k8s-master-01 anchore-engine]# kubectl get pods -n anchore-engine
NAME                                                        READY   STATUS    RESTARTS   AGE
anchore-engine-anchore-engine-analyzer-fcf9ffcc8-dv955      1/1     Running   0          10h
anchore-engine-anchore-engine-api-7f98dc568-j6tsz           1/1     Running   0          10h
anchore-engine-anchore-engine-catalog-754b996b75-q5hqg      1/1     Running   0          10h
anchore-engine-anchore-engine-policy-745b6778f7-hbsvx       1/1     Running   0          10h
anchore-engine-anchore-engine-simplequeue-695df4498-wgss4   1/1     Running   0          10h
anchore-engine-postgresql-9cdbb5f7f-4dcnk                   1/1     Running   0          10h
[root@k8s-master-01 anchore-engine]# kubectl get svc -n anchore-engine
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
anchore-engine-anchore-engine-api           ClusterIP   172.19.255.231   <none>        8228/TCP   10h
anchore-engine-anchore-engine-catalog       ClusterIP   172.19.254.163   <none>        8082/TCP   10h
anchore-engine-anchore-engine-policy        ClusterIP   172.19.254.91    <none>        8087/TCP   10h
anchore-engine-anchore-engine-simplequeue   ClusterIP   172.19.253.141   <none>        8083/TCP   10h
anchore-engine-postgresql                   ClusterIP   172.19.252.126   <none>        5432/TCP   10h
[root@k8s-master-01 anchore-engine]# kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli  --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=xxxxxx --env ANCHORE_CLI_URL=http://172.19.255.231:8228/v1
[anchore@anchore-cli anchore-cli]$ anchore-cli registry add harbor.xxxx.com zhangpeng xxxxxxxx
[anchore@anchore-cli anchore-cli]$ anchore-cli --url http://172.19.255.231:8228/v1/ --u admin --p xxxx --debug image add harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111192008

That seems to work too, which overturns my earlier assumption that the container runtime or the version was the problem.

I then pointed the Jenkins configuration back at the API address of the Helm-deployed anchore-engine; because of the cluster.local annoyance I simply used the in-cluster Service address:

Running the spinnaker-nginx-demo Jenkins pipeline job

As before, editing a file in GitLab triggers the pipeline. Sadly the high-severity vulnerability check came back FAIL, but at least the pipeline finally runs end to end:

Comparing Trivy with anchore-engine

Taking build 107 of the spinnaker-nginx-demo artifact for comparison, tagged harbor.xxxx.com/spinnaker/spinnaker-nginx-demo:202111201116.
The anchore-engine report:

What on earth? Why does Harbor's Trivy scan report no vulnerabilities at all?

I instantly fell into the obsessive loop of needing to fix every vulnerability...

To sum up:

  1. Harbor's default image scanner is Trivy; Clair can also be selected, and Harbor can apparently be wired up to anchore-engine as well.
  2. anchore-engine needs the private registry added (anchore-cli registry add); with the Helm install, remember to adjust the cluster.local part of the assembled service addresses if your cluster uses a custom domain.
  3. anchore-engine scans more strictly than Trivy.
  4. Get used to leaning on --help: anchore-cli --help (see the sketch below).
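A few anchore-cli commands worth knowing beyond --help; a sketch, assuming ANCHORE_CLI_URL / ANCHORE_CLI_USER / ANCHORE_CLI_PASS are exported as in the kubectl run example above:

anchore-cli --help
anchore-cli system status       # health of every engine service
anchore-cli system feeds list   # vulnerability feed sync state
anchore-cli image list          # images known to the engine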
