
Deploying the EFK Stack on Kubernetes

by 푸푸망나뇽 2021. 4. 29.

Following the EFK introduction post, this time we will build and deploy the EFK stack in a Kubernetes environment.

 

1. Create the Namespace

kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
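
Assuming the manifest above is saved as namespace.yaml (the filename is illustrative), it can be applied and verified with kubectl:

kubectl apply -f namespace.yaml
kubectl get namespace kube-logging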

 

2. Deploy Elasticsearch

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      #volumes:
      #  - name: data
      #    persistentVolumeClaim:
      #      claimName: data
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
        resources:
            limits:
              cpu: 2000m
              memory: 16Gi
            requests:
              cpu: 1000m
              memory: 16Gi
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms8g -Xmx8g"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteMany" ]
      #storageClassName: do-block-storage
      resources:
        requests:
          storage: 100Gi
---
#apiVersion: v1
#kind: PersistentVolumeClaim
#metadata:
#  name: data
#  namespace: kube-logging
#  labels:
#    app: elasticsearch
#spec:
#  accessModes:
#    - ReadWriteOnce
#  resources:
#    requests:
#      storage: 100Gi

---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node       

 

Deploying Elasticsearch requires the following Elasticsearch settings (a quick verification sketch follows the list):

  • discovery.seed_hosts: the hosts of the nodes that make up the Elasticsearch cluster

      ex) "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"

 

  • cluster.initial_master_nodes: the names of the nodes that bootstrap the Elasticsearch cluster

      ex) "es-cluster-0,es-cluster-1,es-cluster-2"

 

  • ES_JAVA_OPTS: the JVM heap size for Elasticsearch (at most half of the memory allocated to the pod)

      ex) "-Xms8g -Xmx8g"


3. Deploy Kibana

apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.9.3
        resources:
          limits:
            cpu: 2000m
          requests:
            cpu: 1000m
        env:
          #- name: SERVER_NAME
          #  value: "kibana.kubenetes.example.com"
          - name: SERVER_REWRITEBASEPATH
            value: "true"
          - name: SERVER_BASEPATH
            value: "/log"
          - name: ELASTICSEARCH_URL
            value: "http://elasticsearch.kube-logging.svc.cluster.local:9200"
          - name: ELASTICSEARCH_HOSTS
            value: "http://elasticsearch.kube-logging.svc.cluster.local:9200"
        ports:
        - containerPort: 5601

 

Deploying Kibana requires the following settings (a port-forward check follows the list):

  • ELASTICSEARCH_URL: the Elasticsearch URL Kibana connects to

      ex) "http://elasticsearch.kube-logging.svc.cluster.local:9200", "http://localhost:9200"

 

  • ELASTICSEARCH_HOSTS: the Elasticsearch URL Kibana connects to (depending on the Kibana version, this is read instead of ELASTICSEARCH_URL, so specify both)

      ex) "http://elasticsearch.kube-logging.svc.cluster.local:9200"


4. Deploy Fluentd

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.kube-log.svc.cluster.local"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Deploying Fluentd requires the following settings (a DaemonSet check follows the list):

  • FLUENT_ELASTICSEARCH_HOST: the Elasticsearch host Fluentd connects to (host only; the scheme is set separately via FLUENT_ELASTICSEARCH_SCHEME)

      ex) "elasticsearch.kube-logging.svc.cluster.local", "localhost"

 

  • FLUENT_ELASTICSEARCH_PORT: the Elasticsearch port Fluentd connects to

      ex) "9200"


5. Create a VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kibana
  namespace: kube-logging
spec:
  gateways:
  - kubeflow/kubeflow-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /log/
    # rewrite:
    #   uri:
    route:
    - destination:
        host: kibana.kube-logging.svc.cluster.local
        port:
          number: 5601
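
With this VirtualService, any request whose path starts with /log/ arriving at the kubeflow-gateway is routed to the kibana Service on port 5601; since Kibana itself serves under SERVER_BASEPATH=/log, no URI rewrite is needed. For example (the gateway address is environment-specific, shown as a placeholder):

curl -I http://<gateway-host>/log/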

 

6. Troubleshooting

 

Elasticsearch log

 [gc][848] overhead, spent [408ms] collecting in the last [1.1s]

→ The JVM heap size must be configured and the pod given enough memory.

→ Allocate at least twice the JVM heap size as pod memory.

  env:
    - name: ES_JAVA_OPTS
      value: "-Xms2g -Xmx2g"
  resources:
    requests:
      memory: 4Gi
    limits:
      memory: 4Gi
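
One way to confirm the heap settings took effect is the node stats API, which reports heap_max_in_bytes per node (same service URL as elsewhere in this post):

    curl http://elasticsearch.kube-logging.svc.cluster.local:9200/_nodes/stats/jvm?pretty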

 

Kibana Log

 ElasticSearch All Shards Failed

 

This means a problem has occurred in the Elasticsearch cluster itself.

Reference: https://joypinkgom.tistory.com/228

 

  1. Check the Elasticsearch cluster status
    curl http://elasticsearch.kube-logging.svc.cluster.local:9200/_cluster/health?pretty

  2. Identify and delete the Elasticsearch index causing the problem
    curl http://elasticsearch.kube-logging.svc.cluster.local:9200/_cluster/health?level=shards
    curl -XDELETE http://elasticsearch.kube-logging.svc.cluster.local:9200/<index-name>
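
Before deleting anything, the _cat APIs give a compact view of which indices and shards are unhealthy (red status means unassigned primary shards):

    curl http://elasticsearch.kube-logging.svc.cluster.local:9200/_cat/indices?v
    curl http://elasticsearch.kube-logging.svc.cluster.local:9200/_cat/shards?v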
