Preface
This article explains how to deploy Logstash on a Kubernetes cluster with Helm, for your reference.
Prerequisites
- An Alibaba Cloud Kubernetes cluster
- Helm installed
Installing Logstash
First, add the Bitnami Helm repository and search for logstash:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm search repo logstash
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/logstash 5.4.1 8.7.1 Logstash is an open source data processing engi...
stable/dmarc2logstash 1.3.1 1.0.3 DEPRECATED Provides a POP3-polled DMARC XML rep...
stable/logstash 2.4.3 7.1.1 DEPRECATED - Logstash is an open source, server...
bitnami/dataplatform-bp2 12.0.5 1.0.1 DEPRECATED This Helm chart can be used for the ...
The chart version we will install is 5.4.1 (app version 8.7.1). Next, pull the chart source:
$ helm pull bitnami/logstash
Extract the downloaded logstash-5.4.1.tgz file, as shown below:
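A typical way to extract it is with tar; the directory listing below is indicative and may differ slightly between chart versions:
$ tar -xzf logstash-5.4.1.tgz
$ ls logstash
Chart.lock  Chart.yaml  README.md  charts  templates  values.yaml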
After that, open the values.yaml file, which looks like this:
## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
## @param global.storageClass Global StorageClass for Persistent Volume(s)
##
global:
imageRegistry: ""
## E.g.
## imagePullSecrets:
## - myRegistryKeySecretName
##
imagePullSecrets: []
storageClass: ""
## @section Common parameters
## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set)
##
kubeVersion: ""
## @param nameOverride String to partially override logstash.fullname template (will maintain the release name)
##
nameOverride: ""
## @param fullnameOverride String to fully override logstash.fullname template
##
fullnameOverride: ""
## @param clusterDomain Default Kubernetes cluster domain
##
clusterDomain: cluster.local
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param extraDeploy Array of extra objects to deploy with the release (evaluated as a template).
##
extraDeploy: []
## Enable diagnostic mode in the deployment
##
diagnosticMode:
## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden)
##
enabled: false
## @param diagnosticMode.command Command to override all containers in the deployment
##
command:
- sleep
## @param diagnosticMode.args Args to override all containers in the deployment
##
args:
- infinity
## @section Logstash parameters
## Bitnami Logstash image
## ref: https://hub.docker.com/r/bitnami/logstash/tags/
## @param image.registry Logstash image registry
## @param image.repository Logstash image repository
## @param image.tag Logstash image tag (immutable tags are recommended)
## @param image.digest Logstash image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param image.pullPolicy Logstash image pull policy
## @param image.pullSecrets Specify docker-registry secret names as an array
## @param image.debug Specify if debug logs should be enabled
##
image:
registry: docker.io
repository: bitnami/logstash
tag: 8.6.0-debian-11-r0
digest: ""## Specify a imagePullPolicy. Defaults to'Always'if image tag is'latest', else set to'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Set to true if you would like to see extra information on logs
##
debug: false
## @param hostAliases Add deployment host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param configFileName Logstash configuration file name. It must match the name of the configuration file mounted as a configmap.
##
configFileName: logstash.conf
## @param enableMonitoringAPI Whether to enable the Logstash Monitoring API or not Kubernetes cluster domain
##
enableMonitoringAPI: true
## @param monitoringAPIPort Logstash Monitoring API Port
##
monitoringAPIPort: 9600
## @param extraEnvVars Array containing extra env vars to configure Logstash
## For example:
## extraEnvVars:
## - name: ELASTICSEARCH_HOST
## value: "x.y.z"
##
extraEnvVars: []
## @param extraEnvVarsSecret To add secrets to environment
##
extraEnvVarsSecret: ""
## @param extraEnvVarsCM To add configmaps to environment
##
extraEnvVarsCM: ""
## @param input [string] Input Plugins configuration
## ref: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
##
input: |-
# udp {
# port => 1514
# type => syslog
# }
# tcp {
# port => 1514
# type => syslog
# }
http {port => 8080}
## @param filter Filter Plugins configuration
## ref: https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
## e.g:
## filter: |-
##   grok {
##     match => { "message" => "%{COMBINEDAPACHELOG}" }
##   }
##   date {
##     match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z"]
##   }
##
filter: ""
## @param output [string] Output Plugins configuration
## ref: https://www.elastic.co/guide/en/logstash/current/output-plugins.html
##
output: |-
# elasticsearch {
#   hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
#   manage_template => false
#   index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
# }
# gelf {
#   host => "${GRAYLOG_HOST}"
#   port => ${GRAYLOG_PORT}
# }
stdout {}
## @param existingConfiguration Name of existing ConfigMap object with the Logstash configuration (`input`, `filter`, and `output` will be ignored).
##
existingConfiguration: ""
## @param enableMultiplePipelines Allows user to use multiple pipelines
## ref: https://www.elastic.co/guide/en/logstash/master/multiple-pipelines.html
##
enableMultiplePipelines: false
## @param extraVolumes Array to add extra volumes (evaluated as a template)
## extraVolumes:
## - name: myvolume
## configMap:
## name: myconfigmap
##
extraVolumes: []
## @param extraVolumeMounts Array to add extra mounts (normally used with extraVolumes, evaluated as a template)
## extraVolumeMounts:
## - mountPath: /opt/bitnami/desired-path
## name: myvolume
## readOnly: true
##
extraVolumeMounts: []
## ServiceAccount for Logstash
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## @param serviceAccount.create Enable creation of ServiceAccount for Logstash pods
##
create: true
## @param serviceAccount.name The name of the service account to use. If not set and `create` is `true`, a name is generated
## If not set and create is true, a name is generated using the logstash.serviceAccountName template
##
name: ""
## @param serviceAccount.automountServiceAccountToken Allows automount of ServiceAccountToken on the serviceAccount created
## Can be set to false if pods using this serviceAccount do not need to use K8s API
##
automountServiceAccountToken: true
## @param serviceAccount.annotations Additional custom annotations for the ServiceAccount
##
annotations: {}
## @param containerPorts [array] Array containing the ports to open in the Logstash container (evaluated as a template)
##
containerPorts:
- name: http
containerPort: 8080
protocol: TCP
- name: monitoring
containerPort: 9600
protocol: TCP
## - name: syslog-udp
## containerPort: 1514
## protocol: UDP
## - name: syslog-tcp
## containerPort: 1514
## protocol: TCP
##
## @param initContainers Add additional init containers to the Logstash pod(s)
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
## e.g:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## command: ['sh', '-c', 'echo "hello world"']
##
initContainers: []
## @param sidecars Add additional sidecar containers to the Logstash pod(s)
## e.g:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: []
## @param replicaCount Number of Logstash replicas to deploy
##
replicaCount: 1
## @param updateStrategy.type Update strategy type (`RollingUpdate`, or `OnDelete`)
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
##
updateStrategy:
type: RollingUpdate
## @param podManagementPolicy Pod management policy
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
##
podManagementPolicy: OrderedReady
## @param podAnnotations Pod annotations
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podLabels Extra labels for Logstash pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
##
type: ""
## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set.
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set.
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## @param affinity Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param priorityClassName Pod priority
## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""
## @param schedulerName Name of the k8s scheduler (other than default)
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param terminationGracePeriodSeconds In seconds, time the given to the Logstash pod needs to terminate gracefully
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
terminationGracePeriodSeconds: ""
## @param topologySpreadConstraints Topology Spread Constraints for pod assignment
## https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
## The value is evaluated as a template
##
topologySpreadConstraints: []
## K8s Security Context for Logstash pods
## Configure Pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enabled Logstash pods' Security Context
## @param podSecurityContext.fsGroup Set Logstash pod's Security Context fsGroup
##
podSecurityContext:
enabled: true
fsGroup: 1001
## Configure Container Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param containerSecurityContext.enabled Enabled Logstash containers' Security Context
## @param containerSecurityContext.runAsUser Set Logstash containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set Logstash container's Security Context runAsNonRoot
##
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
## @param command Override default container command (useful when using custom images)
##
command: []
## @param args Override default container args (useful when using custom images)
##
args: []
## @param lifecycleHooks for the Logstash container(s) to automate configuration before or after startup
##
lifecycleHooks: {}
## Logstash containers' resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
## @param resources.limits The resources limits for the Logstash container
## @param resources.requests The requested resources for the Logstash container
##
resources:
## Example:
## limits:
## cpu: 100m
## memory: 128Mi
limits: {}
## Examples:
## requests:
## cpu: 100m
## memory: 128Mi
requests: {}
## Configure extra options for Logstash containers' liveness, readiness and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
## @param startupProbe.enabled Enable startupProbe
## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe
## @param startupProbe.periodSeconds Period seconds for startupProbe
## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe
## @param startupProbe.failureThreshold Failure threshold for startupProbe
## @param startupProbe.successThreshold Success threshold for startupProbe
##
startupProbe:
enabled: false
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
## @param livenessProbe.enabled Enable livenessProbe
## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe
## @param livenessProbe.periodSeconds Period seconds for livenessProbe
## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe
## @param livenessProbe.failureThreshold Failure threshold for livenessProbe
## @param livenessProbe.successThreshold Success threshold for livenessProbe
##
livenessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
## @param readinessProbe.enabled Enable readinessProbe
## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe
## @param readinessProbe.periodSeconds Period seconds for readinessProbe
## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe
## @param readinessProbe.failureThreshold Failure threshold for readinessProbe
## @param readinessProbe.successThreshold Success threshold for readinessProbe
##
readinessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
## @param customStartupProbe Custom startup probe for the Web component
##
customStartupProbe: {}
## @param customLivenessProbe Custom liveness probe for the Web component
##
customLivenessProbe: {}
## @param customReadinessProbe Custom readiness probe for the Web component
##
customReadinessProbe: {}
## Service parameters
##
service:
## @param service.type Kubernetes service type (`ClusterIP`, `NodePort`, or `LoadBalancer`)
##
type: ClusterIP
## @param service.ports [array] Logstash service ports (evaluated as a template)
##
ports:
- name: http
port: 8080
targetPort: http
protocol: TCP
## - name: syslog-udp
## port: 1514
## targetPort: syslog-udp
## protocol: UDP
## - name: syslog-tcp
## port: 1514
## targetPort: syslog-tcp
## protocol: TCP
##
## @param service.loadBalancerIP loadBalancerIP if service type is `LoadBalancer`
##
loadBalancerIP: ""
## @param service.loadBalancerSourceRanges Addresses that are allowed when service is LoadBalancer
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
## e.g:
## loadBalancerSourceRanges:
## - 10.10.10.0/24
##
loadBalancerSourceRanges: []
## @param service.externalTrafficPolicy External traffic policy, configure to Local to preserve client source IP when using an external loadBalancer
## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
##
externalTrafficPolicy: ""
## @param service.clusterIP Static clusterIP or None for headless services
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
## e.g:
## clusterIP: None
##
clusterIP: ""
## @param service.annotations Annotations for Logstash service
##
annotations: {}
## @param service.sessionAffinity Session Affinity for Kubernetes service, can be "None" or "ClientIP"
## If "ClientIP", consecutive client requests will be directed to the same Pod
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
##
sessionAffinity: None
## @param service.sessionAffinityConfig Additional settings for the sessionAffinity
## sessionAffinityConfig:
## clientIP:
## timeoutSeconds: 300
##
sessionAffinityConfig: {}
## Persistence parameters
##
persistence:
## @param persistence.enabled Enable Logstash data persistence using PVC
##
enabled: false
## @param persistence.existingClaim A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
## The value is evaluated as a template
##
existingClaim: ""
## @param persistence.storageClass PVC Storage Class for Logstash data volume
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
storageClass: ""
## @param persistence.accessModes PVC Access Mode for Logstash data volume
##
accessModes:
- ReadWriteOnce
## @param persistence.size PVC Storage Request for Logstash data volume
##
size: 2Gi
## @param persistence.annotations Annotations for the PVC
##
annotations: {}
## @param persistence.mountPath Mount path of the Logstash data volume
##
mountPath: /bitnami/logstash/data
## @param persistence.selector Selector to match an existing Persistent Volume for WordPress data PVC
## If set, the PVC can't have a PV dynamically provisioned for it
## E.g.
## selector:
## matchLabels:
## app: my-app
##
selector: {}
## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup`
##
enabled: false
## The security context for the volumePermissions init container
## @param volumePermissions.securityContext.runAsUser User ID for the volumePermissions init container
##
securityContext:
runAsUser: 0
## @param volumePermissions.image.registry Init container volume-permissions image registry
## @param volumePermissions.image.repository Init container volume-permissions image repository
## @param volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended)
## @param volumePermissions.image.digest Init container volume-permissions image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag
## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy
## @param volumePermissions.image.pullSecrets Specify docker-registry secret names as an array
##
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r70
digest: ""
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Example:
## pullSecrets:
## - myRegistryKeySecretName
##
pullSecrets: []
## Init Container resource requests and limits
## ref: https://kubernetes.io/docs/user-guide/compute-resources/
## We usually recommend not to specify default resources and to leave this as a conscious
## choice for the user. This also increases chances charts run on environments with little
## resources, such as Minikube. If you do want to specify resources, uncomment the following
## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
## @param volumePermissions.resources.limits Init container volume-permissions resource limits
## @param volumePermissions.resources.requests Init container volume-permissions resource requests
##
resources:
## Example:
## limits:
## cpu: 100m
## memory: 128Mi
limits: {}
## Examples:
## requests:
## cpu: 100m
## memory: 128Mi
requests: {}
## Configure the ingress resource that allows you to access the
## Logstash installation. Set up the URL
## ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## @param ingress.enabled Enable ingress controller resource
##
enabled: false
## @param ingress.selfSigned Create a TLS secret for this ingress record using self-signed certificates generated by Helm
##
selfSigned: false
## @param ingress.pathType Ingress Path type
##
pathType: ImplementationSpecific
## @param ingress.apiVersion Override API Version (automatically detected if not set)
##
apiVersion: ""
## @param ingress.hostname Default host for the ingress resource
##
hostname: logstash.local
## @param ingress.path The Path to Logstash. You may need to set this to '/*' in order to use this with ALB ingress controllers.
##
path: /
## @param ingress.annotations Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations.
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
## Use this parameter to set the required annotations for cert-manager, see
## ref: https://cert-manager.io/docs/usage/ingress/#supported-annotations
##
## e.g:
## annotations:
## kubernetes.io/ingress.class: nginx
## cert-manager.io/cluster-issuer: cluster-issuer-name
##
annotations: {}
## @param ingress.tls Enable TLS configuration for the hostname defined at ingress.hostname parameter
## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname}}
## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
##
tls: false
## @param ingress.extraHosts The list of additional hostnames to be covered with this ingress record.
## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
## extraHosts:
## - name: logstash.local
## path: /
##
extraHosts: []
## @param ingress.extraPaths Any additional arbitrary paths that may need to be added to the ingress under the main host.
## For example: The ALB ingress controller requires a special rule for handling SSL redirection.
## extraPaths:
## - path: /*
## backend:
## serviceName: ssl-redirect
## servicePort: use-annotation
##
extraPaths: []
## @param ingress.extraRules The list of additional rules to be added to this ingress record. Evaluated as a template
## Useful when looking for additional customization, such as using different backend
##
extraRules: []
## @param ingress.extraTls The tls configuration for additional hostnames to be covered with this ingress record.
## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
## extraTls:
## - hosts:
## - logstash.local
## secretName: logstash.local-tls
##
extraTls: []
## @param ingress.secrets If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
##
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
##
## secrets:
## - name: logstash.local-tls
## key:
## certificate:
##
secrets: []
## @param ingress.ingressClassName IngressClass that will be be used to implement the Ingress (Kubernetes 1.18+)
## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster .
## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
##
ingressClassName: ""
## Pod disruption budget configuration
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
## @param pdb.create If true, create a pod disruption budget for pods.
## @param pdb.minAvailable Minimum number / percentage of pods that should remain scheduled
## @param pdb.maxUnavailable Maximum number / percentage of pods that may be made unavailable
##
pdb:
create: false
minAvailable: 1
maxUnavailable: ""
- Configuration 1: choose a StorageClass
Since we are running on Alibaba Cloud, we have to use Alibaba Cloud storage, which comes in several flavors such as cloud disks, NAS and OSS. Cloud disks start at 20Gi, which is more than we need, so NAS is the better fit. Using NAS requires enabling the service and creating a StorageClass; I previously wrote an article on creating an Alibaba Cloud NAS StorageClass (the example there is named alicloud-nas-fs), so follow that first. Once the StorageClass exists, the PVC (PersistentVolumeClaim) binds to it and a PV (PersistentVolume) is provisioned automatically; a quick check is shown after the snippet below.
persistence:
enabled: true
# NAS
storageClass: alibabacloud-cnfs-nas
size: 2Gi
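Before installing, it is worth confirming that the StorageClass referenced here actually exists in the cluster; the name in the output should match the one set in values.yaml:
$ kubectl get storageclass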
- Configuration 2: configure the Service
Set the Service type to NodePort so it can also be reached from outside the cluster, then uncomment the syslog-udp and syslog-tcp ports that ship commented out in the chart, as shown below:
service:
type: NodePort
ports:
- name: http
port: 8080
targetPort: http
protocol: TCP
- name: syslog-udp
port: 1514
targetPort: syslog-udp
protocol: UDP
- name: syslog-tcp
port: 1514
targetPort: syslog-tcp
protocol: TCP
- Configuration 3: configure containerPorts
Uncomment the commented-out entries in the containerPorts section, as shown below:
containerPorts:
- name: http
containerPort: 8080
protocol: TCP
- name: monitoring
containerPort: 9600
protocol: TCP
- name: syslog-udp
containerPort: 1514
protocol: UDP
- name: syslog-tcp
containerPort: 1514
protocol: TCP
- Configuration 4: configure input and output
In the input section, set the codec to json_lines and set type to syslog (this is a custom tag and can be any name, e.g. abc); in the output section, configure the Elasticsearch connection details, as shown below (a quick test request follows the snippet):
input: |-
udp {
port => 1514
type => syslog
codec => json_lines
}
tcp {
port => 1514
type => syslog
codec => json_lines
}
http {port => 8080}
output: |-
if [active] != "" {
elasticsearch {hosts => ["xxx.xxx.xxx.xxx:xxxx"]
index => "%{active}-logs-%{+YYYY.MM.dd}"
}
} else {
elasticsearch {hosts => ["xxx.xxx.xxx.xxx:xxxx"]
index => "ignore-logs-%{+YYYY.MM.dd}"
}
}
stdout {}
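As a quick illustration of the routing above: any event that carries an active field is written to an index named after that field. For example, posting a sample payload to the HTTP input (the field values below are made up, and the port must be reachable, e.g. via the port-forward shown later):
$ curl -X POST -H "Content-Type: application/json" \
  -d '{"active":"orders","message":"order created","env":"dev"}' \
  http://localhost:8080
With the output block above, this event would land in an index such as orders-logs-<date>.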
Once these changes are in place, the complete values.yaml looks like this:
values.yaml
global:
storageClass: alibabacloud-cnfs-nas
service:
type: NodePort
ports:
- name: http
port: 8080
targetPort: http
protocol: TCP
- name: syslog-udp
port: 1514
targetPort: syslog-udp
protocol: UDP
- name: syslog-tcp
port: 1514
targetPort: syslog-tcp
protocol: TCP
persistence:
enabled: true
# NAS
storageClass: alibabacloud-cnfs-nas
size: 2Gi
containerPorts:
- name: http
containerPort: 8080
protocol: TCP
- name: monitoring
containerPort: 9600
protocol: TCP
- name: syslog-udp
containerPort: 1514
protocol: UDP
- name: syslog-tcp
containerPort: 1514
protocol: TCP
input: |-
udp {
port => 1514
type => syslog
codec => json
}
tcp {
port => 1514
type => syslog
codec => json
}
http {port => 8080}
output: |-
if [active] != "" {
elasticsearch {hosts => ["47.97.208.153:31474"]
index => "%{active}-logs-%{+YYYY.MM.dd}"
}
} else {
elasticsearch {hosts => ["47.97.208.153:31474"]
index => "logs-%{+YYYY.MM.dd}"
}
}
stdout {}
Next, we create the logstash release with the following helm commands:
$ kubectl create namespace logstash
$ helm install -f values.yaml logstash bitnami/logstash --namespace logstash
Once the release is created successfully, you will see output like this:
NAME: logstash
LAST DEPLOYED: Wed Sep 27 10:58:35 2023
NAMESPACE: logstash
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: logstash
CHART VERSION: 5.4.1
APP VERSION: 8.7.1
** Please be patient while the chart is being deployed **
Logstash can be accessed through following DNS names from within your cluster:
Logstash: logstash.logstash.svc.cluster.local
To access Logstash from outside the cluster execute the following commands:
export NODE_PORT=$(kubectl get --namespace logstash -o jsonpath="{.spec.ports[0].nodePort}" services logstash)
export NODE_IP=$(kubectl get nodes --namespace logstash -o jsonpath="{.items[0].status.addresses[0].address}")
echo "http://${NODE_IP}:${NODE_PORT}"
Now we just wait for the logstash pod to come up. We can also check whether everything was created successfully with:
$ kubectl get all -n logstash
NAME READY STATUS RESTARTS AGE
pod/logstash-0 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/logstash NodePort 192.168.132.104 <none> 8080:32345/TCP,1514:30227/UDP,1514:30227/TCP 34m
service/logstash-headless ClusterIP None <none> 8080/TCP,1514/UDP,1514/TCP 34m
NAME READY AGE
statefulset.apps/logstash 1/1 34m
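To verify that the pipeline actually started, tail the pod logs; the startup output should show the pipeline starting and the configured inputs listening on their ports:
$ kubectl logs -f logstash-0 -n logstash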
Getting the IP and port
Use the following commands:
export NODE_PORT=$(kubectl get --namespace logstash -o jsonpath="{.spec.ports[0].nodePort}" services logstash)
export NODE_IP=$(kubectl get nodes --namespace logstash -o jsonpath="{.items[0].status.addresses[0].address}")
echo "http://${NODE_IP}:${NODE_PORT}"
Output:
http://xxx.xxx.xxx.xxx:xxxx
Testing the Logstash installation with curl
1. First, forward Logstash's HTTP port 8080 to your local machine, as follows:
kubectl port-forward service/logstash 8080:8080 -nlogstash
2. Use curl to check that it responds, as follows:
curl -X POST -d '{"message":"Hello World","env":"dev"}' http://localhost:8080
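If the request returns an HTTP 200, the event has been accepted by the HTTP input. To confirm it reached Elasticsearch, list the indices; replace the host and port with the Elasticsearch address configured in values.yaml, and a new index created by Logstash should appear:
$ curl "http://<elasticsearch-host>:<port>/_cat/indices?v"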
Note: the NodePort 32345 above maps to the HTTP port, while 30227 maps to the TCP/UDP syslog port; be careful not to mix them up.
Note: from a local environment you need to configure brokers: xxx.xxx.xxx.xxx:30227; if the client is deployed in the same Kubernetes cluster, you can address Logstash directly by its DNS name instead, e.g. brokers: logstash-0.logstash-headless.logstash.svc.cluster.local:30227.
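A quick smoke test of the TCP input through the NodePort is possible with netcat; the node IP placeholder and port 30227 come from the service output above, and the JSON payload is just a sample (echo appends a trailing newline, which the json_lines codec expects; with the plain json codec the behaviour may differ):
$ echo '{"message":"hello over tcp","env":"dev"}' | nc <node-ip> 30227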
Deleting
1. To delete the deployment, go to the Alibaba Cloud console, select the Helm application, and delete it there (the CLI equivalent is shown after this list).
2. Remember to delete the PVC as well, otherwise it will keep incurring charges.
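The same can be done from the command line; helm uninstall removes the release, and the PVC is listed and removed separately (check the actual PVC name in your cluster first):
$ helm uninstall logstash --namespace logstash
$ kubectl get pvc -n logstash
$ kubectl delete pvc <pvc-name> -n logstash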
Summary
1. Installing a Logstash cluster with Helm is very convenient.
2. For local development access, set the service type to NodePort; for production, the default ClusterIP is the better choice.
3. Because Alibaba Cloud disks start at 20G, which is more than we need, NAS works out cheaper (ignore this if budget is not a concern).
4. logback connects to Logstash over TCP, while I used the HTTP port for testing; do not mix up the two!