Preface:

Back on Kubernetes 1.16 I installed Elastic Cloud on Kubernetes (ECK) 1.0, using local disk for storage; it ran for over a year with Elasticsearch 7.6.2. I have since stood up a Kubernetes 1.20.5 environment with containerd, Cilium, and Hubble (https://blog.csdn.net/saynaihe/article/details/115187298) and integrated Tencent Cloud CBS block storage (https://blog.csdn.net/saynaihe/article/details/115212770). ECK, meanwhile, has moved on to 1.5 (I'll admit it was still 1.4.0 when I installed it two days ago... fortunately my usage is simple and nothing major changed besides the version, so let's just run through it again).
The original ECK 1.0 setup on Kubernetes 1.16 is written up at https://duiniwukenaihe.github.io/2019/10/21/k8s-efk/.

About ECK

Elastic Cloud on Kubernetes is an operator-style installation that greatly simplifies deploying the stack; prometheus-operator takes the same approach.
The deployment follows the official documentation at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html.

1. Deploy ECK in the Kubernetes cluster

1. Install the custom resource definitions and the operator with its RBAC rules:

```bash
kubectl apply -f https://download.elastic.co/d...
```

2. Follow the operator logs:

```bash
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```
----------------------- separator -----------------
I simply downloaded the YAML to a local file first.

(As for the trailing 1 in all-in-one.yaml.1: ignore it, downloading the file a second time added the suffix.)

```bash
kubectl apply -f all-in-one.yaml
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```
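Before going further, it's worth a quick sanity check that the operator actually came up; nothing here is assumed beyond the namespace and CRDs installed by the manifest above:

```bash
# The operator runs as a single-replica StatefulSet in elastic-system;
# READY 1/1 means it came up cleanly.
kubectl -n elastic-system get pods

# The all-in-one manifest also installs the Elastic CRDs.
kubectl get crd | grep elastic.co
```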

2. Deploy the Elasticsearch cluster

1. Customize the Elasticsearch image

Add the repository-s3 plugin, switch the time zone to UTC+8 (Asia/Shanghai), bake in the Tencent Cloud COS credentials, and rebuild the Elasticsearch image.

1. The Dockerfile is as follows

```dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:7.12.0
ARG ACCESS_KEY=XXXXXXXXX
ARG SECRET_KEY=XXXXXXX
ARG ENDPOINT=cos.ap-shanghai.myqcloud.com
ARG ES_VERSION=7.12.0
ARG PACKAGES="net-tools lsof"
ENV allow_insecure_settings 'true'

# Switch the container to the Asia/Shanghai (UTC+8) time zone.
RUN rm -rf /etc/localtime && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' > /etc/timezone

RUN if [ -n "${PACKAGES}" ]; then yum install -y $PACKAGES && yum clean all && rm -rf /var/cache/yum; fi

# Install the repository-s3 plugin and store the COS credentials in the
# keystore, using the build args declared above rather than hard-coded values.
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3 && \
    /usr/share/elasticsearch/bin/elasticsearch-keystore create && \
    echo "${ACCESS_KEY}" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.access_key && \
    echo "${SECRET_KEY}" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
```

2. Build the image and push it to the Tencent Cloud registry (any other private registry works as well)

```bash
docker build -t ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0 .
docker push ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0
```
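Since the Dockerfile declares ACCESS_KEY and SECRET_KEY as build args, the real COS keys can be supplied at build time instead of being edited into the file; a sketch with placeholder values:

```bash
# The key values here are placeholders, not real credentials.
docker build \
  --build-arg ACCESS_KEY=AKIDxxxxxxxx \
  --build-arg SECRET_KEY=xxxxxxxx \
  -t ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0 .
```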

2. Create the Elasticsearch deployment YAML and deploy the cluster

The manifest uses the image tag built above and Tencent Cloud CBS CSI block storage, and deploys into its own namespace, logging.

1. Create the namespace for the Elasticsearch application and write the manifest

```bash
kubectl create ns logging

cat <<EOF > elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic
  namespace: logging
spec:
  version: 7.12.0
  image: ccr.ccs.tencentyun.com/XXXX/elasticsearch:7.12.0
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: laya
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 4Gi
              cpu: 0.5
            limits:
              memory: 4Gi
              cpu: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: cbs-csi
        resources:
          requests:
            storage: 200Gi
EOF
```
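Since the volumeClaimTemplates reference the cbs-csi StorageClass from the earlier CBS article, it's worth confirming the class exists before applying; otherwise the PVCs will sit in Pending:

```bash
kubectl get storageclass cbs-csi
```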

2. Apply the YAML and watch the deployment status

```bash
kubectl apply -f elastic.yaml
kubectl get elasticsearch -n logging
kubectl -n logging get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=elastic'
```

3. Retrieve the Elasticsearch credentials

```bash
kubectl -n logging get secret elastic-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
```
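With the password in hand, a quick health check confirms the cluster is reachable; a minimal sketch using a local port-forward (elastic-es-http follows ECK's `<cluster-name>-es-http` service naming, and plain HTTP works because TLS was disabled in the manifest):

```bash
# Decode the elastic user's password from the ECK-managed secret.
PASSWORD=$(kubectl -n logging get secret elastic-es-elastic-user \
  -o=jsonpath='{.data.elastic}' | base64 --decode)

# Forward the Elasticsearch HTTP service to localhost and ask for health.
kubectl -n logging port-forward service/elastic-es-http 9200:9200 &
curl -u "elastic:${PASSWORD}" 'http://localhost:9200/_cluster/health?pretty'
```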

4. Straight on to installing Kibana

As with the Elasticsearch image, the time zone is switched to UTC+8, and the UI language is set to Chinese. For why selfSignedCertificate is disabled, see https://www.elastic.co/guide/en/cloud-on-k8s/1.4/k8s-kibana-http-configuration.html.

```bash
cat <<EOF > kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elastic
  namespace: logging
spec:
  version: 7.12.0
  image: docker.elastic.co/kibana/kibana:7.12.0
  count: 1
  elasticsearchRef:
    name: elastic
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: I18N_LOCALE
          value: zh-CN
        resources:
          requests:
            memory: 1Gi
          limits:
            memory: 2Gi
        volumeMounts:
        - name: timezone-volume
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: timezone-volume
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
EOF

kubectl apply -f kibana.yaml
```
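Like Elasticsearch, the Kibana resource reports its own health, so the rollout can be watched the same way (the label selector follows the pattern ECK uses in its quickstart):

```bash
kubectl get kibana -n logging
kubectl -n logging get pods --selector='kibana.k8s.elastic.co/name=elastic'
```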

5. Expose the Kibana service externally

External exposure used to go through Traefik as an HTTPS proxy: a TLS secret was added to the namespace and bound to the internal Kibana service, with an external SLB proxying port 443 (UDP). But Tencent Cloud SLB can now carry multiple certificates, so that layer has been stripped away: traffic goes straight over HTTP to port 80, and the HTTPS certificates all terminate at the SLB load balancer. That takes the pain out of certificate management, and the SLB layer can also collect access-layer logs straight into COS and use Tencent Cloud's own log service.

```bash
cat <<'EOF' > ingress.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kibana-kb-http
  namespace: logging
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`kibana.XXXXX.com`)
      kind: Rule
      services:
        - name: elastic-kb-http
          port: 5601
EOF

kubectl apply -f ingress.yaml
```
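Before pointing DNS and the SLB at it, the route can be smoke-tested by curling Traefik's web entrypoint directly with the Host header set; TRAEFIK_IP is a placeholder for however the entrypoint is exposed in your cluster:

```bash
# Expect a 200, or a 302 redirect to Kibana's login page.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Host: kibana.XXXXX.com' http://TRAEFIK_IP/
```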

Log in with username elastic and the password retrieved above, and you're in the management UI. The new interface is pretty slick.

6. Now to add the snapshot repository


Creating a snapshot repository works the same as for any S3 backend; for details see https://blog.csdn.net/ypc123ypc/article/details/87860583.

```
PUT _snapshot/esbackup
{
  "type": "s3",
  "settings": {
    "endpoint": "cos.ap-shanghai.myqcloud.com",
    "region": "ap-shanghai",
    "compress": "true",
    "bucket": "elastic-XXXXXXX"
  }
}
```


OK, time to verify that the snapshot repository was added successfully.
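In Kibana Dev Tools the verification uses the standard snapshot APIs against the repository created above:

```
GET _snapshot/esbackup

POST _snapshot/esbackup/_verify
```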


How about restoring one to try?
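A minimal round trip looks like this; snapshot_1 and the test index are placeholder names, and the rename keeps the restore from colliding with the still-open original index:

```
PUT _snapshot/esbackup/snapshot_1?wait_for_completion=true

POST _snapshot/esbackup/snapshot_1/_restore
{
  "indices": "test",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
```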



Looking alright? Now waiting for it to turn green.

That's basically it; the stack is ready for normal use. There is still plenty to pay attention to in operation, above all cluster design and planning: projected data growth, plus alerting. Next time I have a moment I'll walk through deploying ElastAlert on Kubernetes.