Set up the EFK Stack on an Amazon EKS Cluster

About EFK

EFK stands for Elasticsearch, Fluentd, and Kibana, a widely used open-source logging stack for Kubernetes: Fluentd runs on every node and forwards container logs, Elasticsearch stores and indexes them, and Kibana visualizes them. This guide assumes a running EKS cluster; verify that the nodes are Ready first:

kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-129-220.ec2.internal Ready <none> 27m v1.24.10-eks-48e63af
ip-10-0-149-55.ec2.internal Ready <none> 26m v1.24.10-eks-48e63af
ip-10-0-190-100.ec2.internal Ready <none> 30m v1.24.10-eks-48e63af
ip-10-0-226-108.ec2.internal Ready <none> 30m v1.24.10-eks-48e63af

Set up the EFK Stack

Elasticsearch as a StatefulSet

Save the manifest below as es-sts.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.seed_hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
          - name: cluster.initial_master_nodes
            value: "es-cluster-0,es-cluster-1,es-cluster-2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # storageClassName: ""
      resources:
        requests:
          storage: 3Gi

kubectl create -f es-sts.yaml

 

The StatefulSet needs a headless service (clusterIP: None) for node discovery. Save it as es-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

kubectl create -f es-svc.yaml

 

You can check the PVC status using,

kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-es-cluster-0 Bound pvc-fefd5503-72e9-48ed-8ebb-053c45fe372f 3Gi RWO gp2 24h
data-es-cluster-1 Bound pvc-a3c272a1-7135-40dc-a188-87fdf1804550 3Gi RWO gp2 24h
data-es-cluster-2 Bound pvc-837d2edb-159a-4de1-8d14-5b8fbdb67237 3Gi RWO gp2 24h

Once the Elasticsearch pods come into running status, list them:

kubectl get pods
NAME READY STATUS RESTARTS AGE
es-cluster-0 1/1 Running 0 20h
es-cluster-1 1/1 Running 0 20h
es-cluster-2 1/1 Running 0 20h

Now query the cluster health API — for example, port-forward the REST port (kubectl port-forward es-cluster-0 9200:9200) and run curl http://localhost:9200/_cluster/health?pretty. A healthy three-node cluster reports:

{
"cluster_name" : "k8s-logs",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}

Kibana Deployment & Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601

Save the Service below as kibana-svc.yaml; it exposes Kibana's port 5601 on port 8080 through a load balancer:

apiVersion: v1
kind: Service
metadata:
  name: kibana-np
spec:
  selector:
    app: kibana
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 5601
Apply the Kibana Deployment (assumed here to be saved as kibana-deployment.yaml) and the Service:

kubectl create -f kibana-deployment.yaml

kubectl create -f kibana-svc.yaml

Check if the kibana deployment and pod are running using,

kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
kibana 1/1 1 1 24h

kubectl get pods
NAME READY STATUS RESTARTS AGE
es-cluster-0 1/1 Running 0 24h
es-cluster-1 1/1 Running 0 24h
es-cluster-2 1/1 Running 0 24h
kibana-6db5f8d7c8-zxjtf 1/1 Running 0 3h30m

Fluentd DaemonSet

Fluentd needs RBAC permissions to read pod and namespace metadata. First, create the cluster role fluentd-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

Apply the manifest

kubectl create -f fluentd-role.yaml

Next is the service account, fluentd-sa.yaml.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd

kubectl create -f fluentd-sa.yaml

Then bind the cluster role to the service account with a ClusterRoleBinding (saved here as fluentd-rb.yaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default

kubectl create -f fluentd-rb.yaml

 

Finally, the fluentd DaemonSet itself:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "elasticsearch.default.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Apply the DaemonSet manifest. Once the fluentd pods are up, verify that one is running on every node:

kubectl get pods
NAME READY STATUS RESTARTS AGE
es-cluster-0 1/1 Running 0 24h
es-cluster-1 1/1 Running 0 24h
es-cluster-2 1/1 Running 0 24h
fluentd-d49sw 1/1 Running 0 3h30m
fluentd-pkh2l 1/1 Running 0 3h30m
fluentd-qd6f6 1/1 Running 0 3h31m
fluentd-rvdvx 1/1 Running 0 3h30m
kibana-6db5f8d7c8-zxjtf 1/1 Running 0 3h30m

Test Pod

Deploy a small pod that writes one log line per second, so there is something for Fluentd to ship:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "Thanks for visiting devopscube! $i"; i=$((i+1)); sleep 1; done']
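The args entry packs a small shell script into one line; unpacked (and bounded to three iterations, without the sleep, so it terminates) the loop is simply:

```shell
# Same loop as the counter pod runs, bounded so it exits after three lines
i=0
while [ "$i" -lt 3 ]; do
  echo "Thanks for visiting devopscube! $i"
  i=$((i+1))
done
```

In the pod, the loop runs forever with a one-second sleep, giving Fluentd a steady stream of log lines to forward.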

Kibana Dashboard

Open Kibana through the kibana-np service endpoint on port 8080, create an index pattern for the Fluentd-created indices, and the counter pod's log lines should appear in the Discover view.

How to Build and Push a Spring Boot Docker Image to AWS ECR and Deploy It to an ECS Container Using AWS CodePipeline

In this blog, we describe how to build and push a Spring Boot Docker image to AWS ECR and deploy it to an ECS container using AWS CodePipeline. We will build a sample Spring Boot application, push the image to AWS ECR, and then deploy it to AWS ECS.

Prerequisites:

1. AWS Account

2. GitHub Account

Create an AWS ECR repository

Log in to your AWS account and create an Amazon Elastic Container Registry repository with a name of your choice.

Create an ECS cluster

Create a cluster inside AWS ECS and select the “Networking Only” cluster template, because we use AWS Fargate here.

Create a Task Definition

Select FARGATE as the launch type, choose “ecsTaskExecutionRole” as the task role, select your operating system, set the task memory and CPU, and add a container with a container name whose image is the ECR repository URI.

Create a service inside the cluster

Click on Clusters, select the cluster created above, and create a service with a service name and FARGATE as the launch type.
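The console steps above can also be scripted with the AWS CLI; a rough sketch, in which the repository, cluster, service, and task names, the region, and the subnet and security group IDs are all placeholders to substitute with your own:

```shell
# Placeholders throughout — adjust names, region, subnets, and security group
aws ecr create-repository --repository-name demospringboot --region us-east-2

aws ecs create-cluster --cluster-name demo-cluster --region us-east-2

# Register the task definition from a JSON file, then create a Fargate service
aws ecs register-task-definition --cli-input-json file://task-definition.json --region us-east-2

aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=ENABLED}" \
  --region us-east-2
```

The network configuration must reference subnets and a security group from your own VPC.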

Create a “buildspec.yml” file on the project repository

version: 0.2
phases:
  pre_build:
    commands:
      - mvn clean install
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=788155875213.dkr.ecr.us-east-2.amazonaws.com/demospringboot
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Jar file
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"spring-container","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
    - target/spring-boot-ecs.jar
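The pre_build variable juggling is the least obvious part of the buildspec; here is how COMMIT_HASH and IMAGE_TAG come out for sample values (the two CodeBuild variables below are hypothetical stand-ins for values CodeBuild injects automatically):

```shell
# Hypothetical stand-ins for variables CodeBuild injects at build time
CODEBUILD_RESOLVED_SOURCE_VERSION=0123456789abcdef0123456789abcdef01234567
CODEBUILD_BUILD_ID="demo-project:4f6e2b3c-aaaa-bbbb-cccc-000000000000"
REPOSITORY_URI=123456789012.dkr.ecr.us-east-2.amazonaws.com/demospringboot

# First 7 characters of the commit SHA
COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
# Everything after the colon in the build ID becomes the image tag
IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')

echo "$COMMIT_HASH"   # prints 0123456
echo "$IMAGE_TAG"     # prints build-4f6e2b3c-aaaa-bbbb-cccc-000000000000

# post_build then writes the image definitions file the ECS deploy stage reads
printf '[{"name":"spring-container","imageUri":"%s"}]\n' "$REPOSITORY_URI:$IMAGE_TAG"
```

Each build thus gets a unique, traceable tag alongside latest.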

Edit the pom.xml file and add the line <finalName>spring-boot-ecs</finalName> inside the <build> section.

It sets the name of the generated artifact to spring-boot-ecs, so the JAR produced by Maven has a predictable name that the Dockerfile and buildspec can reference.

Create Dockerfile on the project repository

FROM openjdk:18-jdk-slim

EXPOSE 8080

COPY target/spring-boot-ecs.jar spring-boot-ecs.jar

ENTRYPOINT ["java","-jar","/spring-boot-ecs.jar"]

Go to AWS CodePipeline and create a Pipeline

Click on create new pipeline → Enter the pipeline name → Next

Select source provider as GitHub → Connect to GitHub → Select the repository of our project → Select branch → Next

Select the build provider as AWS CodeBuild → create a project with a project name and operating system, and tick the privileged option (required for CodeBuild to build Docker images).

Select the Deploy provider as AWS ECS → Select cluster name and service name.

Create Pipeline.

Click on the pipeline to watch each stage run.

The pipeline builds the image, pushes it to the AWS ECR repository, and deploys it to AWS ECS. After the deployment completes, open the service inside the cluster, click on the task to find its public IP, copy that IP, and open it in a browser.

How to Deploy a Spring Boot Application into AWS ECS via GitHub Actions

This blog describes deploying a Spring Boot application into AWS ECS via GitHub Actions. We will build a sample Spring Boot application, push the image to AWS ECR, and then deploy it to AWS ECS.

Prerequisites:
1. AWS Account
2. GitHub Account

Create an AWS ECR repository

Log in to your AWS account and create an Amazon Elastic Container Registry repository with a name of your choice.

Create an ECS cluster

Create a cluster inside AWS ECS and select the “Networking Only” cluster template, because we use AWS Fargate here.

Create a Task Definition

Select FARGATE as the launch type, choose “ecsTaskExecutionRole” as the task role, select your operating system, set the task memory and CPU, and add a container with a container name whose image is the ECR repository URI.

Create a service inside the cluster

Click on Clusters, select the cluster created above, and create a service with a service name and FARGATE as the launch type.

Create a task-definition.json file in the project repository

Go to the project repository and create a new file, task-definition.json. Get its content from the console: Task Definitions → select your task → click the task name → copy the JSON shown and paste it into task-definition.json.
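For orientation, a minimal Fargate task definition has roughly the following shape. Every value here is illustrative rather than taken from the original setup, except the container name, which must match the container-name used in the workflow below:

```json
{
  "family": "springboot-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "springboot-container",
      "image": "123456789012.dkr.ecr.ap-south-1.amazonaws.com/springboot:develop",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

The workflow's render step rewrites only the image field; everything else stays as exported from the console.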

Create the workflow

Add a YAML file to your repository (/.github/workflows/filename.yml)

name: Build and Deploy to AWS

on:
  push:
    branches:
      - main

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      # Setup JDK 1.8
      - name: Set up JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
          server-id: github
          settings-path: ${{ github.workspace }}

      # Build 
      - name: Build and Test with Maven
        run: mvn -B package --file pom.xml

      # Configure AWS credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1

      # Login to Amazon ECR
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      # Build, tag, and push image to Amazon ECR
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: springboot
          IMAGE_TAG: develop
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
          # Note: ::set-output is deprecated on current runners; use echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> "$GITHUB_OUTPUT" instead.
      # Push the new image ID in the Amazon ECS task definition
      - name: Push the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: springboot-container
          image: ${{ steps.build-image.outputs.image }}

      # Deploy Amazon ECS task definition
      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: springboot-service
          cluster: springboot-cluster
          wait-for-service-stability: true

Edit the pom.xml file and add the line <finalName>spring-boot-ecs</finalName> inside the <build> section.

It sets the name of the generated artifact to spring-boot-ecs, so the JAR file can be referenced by a predictable name when the image is built and deployed to Amazon ECS.

Create a Dockerfile inside the project repository

FROM maven:3.8.5-openjdk-18-slim AS build

WORKDIR /usr/src/app

COPY . /usr/src/app

RUN mvn package 

FROM openjdk:18-jdk-slim

EXPOSE 80

ARG JAR_FILE=spring-boot-ecs.jar

WORKDIR /opt/app

COPY --from=build /usr/src/app/target/${JAR_FILE} /opt/app/

ENTRYPOINT ["java","-jar","spring-boot-ecs.jar"]

Then click on the Actions tab to watch the workflow run.

The workflow pushes the image to the AWS ECR repository and deploys it to AWS ECS. After the deployment completes, go to the AWS cluster, click on the task to find its public IP, copy that IP, and open it in a browser.