KUBERNETES SYSTEM LOG COLLECTOR

This collector gathers real-time Events and Metrics generated by the Kubernetes system, along with Pod logs produced by every pod running in the cluster, and gives you the freedom to store this data in Elasticsearch either in the cloud or on your local system.

Features

  • One-step deployment using a shell script.
  • Freedom to install each component separately if you don't want all of them.
  • Freedom to store data in a local or a cloud-hosted Elasticsearch.
  • Collects logs, metrics, and events generated by the cluster.

Getting Started

Prerequisites

  • Docker
  • Kubernetes cluster (Minikube)
  • Helm version 3
  • Python 3

Install all components with default configuration

This deploys five pods on Kubernetes, each performing a different role: fetching Kubernetes data, storing it, and processing the generated data.

  1. node-exporter pod, which generates Prometheus logs.
  2. Elasticsearch pod, which is used to store the data generated by all components.
  3. Kibana pod, used to explore and visualize the data.
  4. Logcollector pod, which fetches Pod logs and Event logs and stores them in Elasticsearch.
  5. Metriccollector pod, which fetches Metric logs and the Prometheus logs generated by node-exporter and stores them in Elasticsearch.

./agents.sh
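
As a quick sanity check after the script finishes, you can list the deployed pods. The aiops namespace used here is taken from the Prometheus port-forward example later in this README and is an assumption for the other components; adjust it if your deployment uses a different namespace.

kubectl get pods --namespace aiops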

Install Specific Components

  • Logcollector (Pod Logs and Events) with Elasticsearch

  • ./agents.sh -l

    • This will deploy Logcollector, Elasticsearch and Kibana pods.
    • The Logcollector pod fetches the logs generated by all pods and the Events that occur in the Kubernetes system, then stores them under the podslog and eventslog indices respectively in Elasticsearch (a quick check for these indices is shown after this list).
    • You can easily view this data using Kibana.
  • Metriccollector (Metrics) with Elasticsearch

  • ./agents.sh -m

    • This will deploy Metriccollector, Elasticsearch and Kibana pods.
    • The Metriccollector pod fetches the Metric logs (CPU and Memory usage, in percent) generated by each pod, as well as the Prometheus logs generated by node-exporter, then stores them under the metriclog and prometheuslog indices respectively in Elasticsearch.
    • You can easily view this data using Kibana.
  • Elasticsearch with Kibana

  • ./agents.sh -e

    • This will only install Elasticsearch and Kibana pods.
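
Once any of the collectors are running, a quick way to confirm that data is flowing is to query Elasticsearch directly. The index names (podslog, eventslog, metriclog, prometheuslog) and the NodePort 32000 are the ones documented in this README; the commands below are a minimal check that assumes a local Minikube deployment.

curl "http://$(minikube ip):32000/_cat/indices?v"
curl "http://$(minikube ip):32000/podslog/_search?size=1&pretty"

The first command lists all indices with their document counts; the second returns one sample document from the podslog index.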

Access Components After Deployment

Run minikube ip to get your Minikube IP address; you will need it to access the components listed below (a few quick connectivity checks follow this list).

  • Access Elasticsearch
    • http://minikubeip:32000
  • Access Kibana
    • http://minikubeip:32002
  • Access Prometheus
    • To access Prometheus, you first need to find the pod on which the application is running and then forward its port so that it is reachable on your host system.
      • export POD_NAME=$(kubectl get pods --namespace aiops -l "app=prometheus-node-exporter,release=node-exporter" -o jsonpath="{.items[0].metadata.name}")
      • kubectl port-forward --namespace aiops $POD_NAME 9100
    • After the above steps, Prometheus will be available at
      • http://localhost:9100
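
A few quick connectivity checks (ports 32000, 32002, and 9100 come from the URLs above; /metrics is the standard node-exporter endpoint):

curl "http://$(minikube ip):32000"              # Elasticsearch cluster info
curl -I "http://$(minikube ip):32002"           # Kibana response headers
curl -s http://localhost:9100/metrics | head    # node-exporter metrics (after the port-forward above)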

Uninstall

Uninstall All Components

  • ./uninstall.sh
  • It will remove Elasticsearch, Kibana, node-exporter, Logcollector, and Metriccollector (a command to verify removal is shown at the end of this section).

Uninstall Logcollector

  • ./uninstall.sh -l
  • It will only remove Logcollector.

Uninstall Metriccollector

  • ./uninstall.sh -m
  • It will remove Metriccollector and node-exporter.

Uninstall Elasticsearch and Kibana (Only for Local)

  • ./uninstall.sh -e
  • It will remove Elasticsearch and Kibana.
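
After uninstalling, you can confirm that nothing is left behind. This again assumes the aiops namespace used elsewhere in this README:

kubectl get pods,svc --namespace aiops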
