Cloudhub Log Management (CLM)

INTRODUCTION:

CLM delivers heterogeneous, highly scalable log management with intuitive, actionable dashboards, sophisticated analytics, and broad third-party extensibility. It provides deep operational visibility into the applications and faster troubleshooting.

It collects data across the different applications and displays the log contents with the help of third-party tools.

 

IMPACT:

Currently, there is no option for viewing old logs in CloudHub. Once the log size grows beyond the limit, the old logs are deleted automatically, so during validation we cannot find those logs for backup purposes.

The method implemented here downloads the logs continuously and stores them in a central location. This makes it easy to check the logs when a defect is raised and to verify individual transactions. It also helps in checking the end-to-end transactions between different applications by sending the logs to third-party tools like Splunk and ELK.

 

Two main things that this application takes care of:

-> Backup of the log files: helps in validating old/new transactions when a defect is raised, even if the log files have already been removed from CloudHub.

-> With the help of third-party tools like Splunk and ELK, we can view/validate all the transactions happening between the applications and create dashboards.

 

PRESENT VERSION:

1.0

ARCHITECTURE:

CLM collects and analyzes all types of CloudHub log data. It hits the MuleSoft CloudHub API and retrieves the log data for the requested application, then collects all the application data and stores it for the third-party applications to consume.

[Figure: architecture.png]

For viewing the logs and creating dashboards, we use third-party tools like Splunk and ELK as part of the log management configuration.

Steps taken in the design:

-> Initially, with the help of a Mule application, we hit the CloudHub API to get project/application-specific logs (a rough sketch of this flow is shown after this list).

-> The received logs are stored in an Amazon S3 location in JSON format.

-> On the Splunk and ELK instances, we have configured cron jobs that copy the log files from the Amazon S3 location to the Splunk and ELK locations.

-> Splunk and ELK display the received data, and we can create dashboards from the specific searches we make.
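
As a rough illustration of the first two steps, the sketch below pulls the log records for one application over HTTPS and drops the resulting JSON file into the S3 bucket that the cron jobs read from. In our setup this work is done by the Mule application itself; the endpoint path, the ANYPOINT_TOKEN variable and the application name are placeholders, not the exact values we use.

#!/bin/bash
# fetch-and-store-logs.sh -- illustrative sketch only; in CLM this flow runs inside the Mule application.
APP_NAME="my-cloudhub-app"                              # placeholder application name
OUT_FILE="/tmp/${APP_NAME}-$(date +%Y%m%d%H%M).json"

# Pull the latest log records for the application (endpoint path and token are assumptions).
curl -s -H "Authorization: Bearer ${ANYPOINT_TOKEN}" \
     "https://anypoint.mulesoft.com/cloudhub/api/v2/applications/${APP_NAME}/logs" \
     -o "${OUT_FILE}"

# Store the JSON log file in the S3 location that the Splunk/ELK cron jobs read from.
aws s3 cp "${OUT_FILE}" "s3://mule-log-affinity-forum/${APP_NAME}/"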

 

SUPPORTED LOG FILES:

You can use CLM to analyze any unstructured or structured log files with the help of third-party tools. Currently, we use the JSON format for log management.
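
For reference, a single CloudHub log record stored in S3 looks roughly like the following; the exact field names depend on the CloudHub API version, so treat this as an illustrative sample rather than the exact schema.

{
  "deploymentId": "abc123",
  "recordId": "def456",
  "event": {
    "timestamp": 1546300800000,
    "priority": "INFO",
    "loggerName": "org.mule.api.processor.LoggerMessageProcessor",
    "threadName": "http-listener-worker-01",
    "message": "Order 1001 received from the upstream application"
  }
}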

 

HOW TO USE SPLUNK AND ELK:

-> The received data is displayed in the Splunk and ELK search. We can specify our search fields in the search bar to get specific results.

-> After getting the results, we can use them to create dashboards and visualizations (an example search is shown below).
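
For example, once the JSON files are indexed in Splunk, a search along these lines narrows the results down and summarizes them (the index and sourcetype names are assumptions about the local configuration, not fixed values):

index=main sourcetype=_json ERROR | stats count by source

A similar keyword or field search in Kibana's Discover view gives the equivalent result on the ELK side.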

 

Appendix:

Below are the steps taken to configure the third-party applications on the Amazon servers:

  1. Splunk Configuration:

On the AWS server, we created a new instance with the ec2 user.

  • Install Splunk on the server as the root user
  • Since the log files are in Amazon S3, we wrote a cron job to copy the files from the S3 location to our Splunk log file location
  • Below are the steps we followed to install and configure Splunk

Switch to the root user before installing.

  • Untar the Splunk package:

sudo tar xvzf splunk-7.1.0-2e75b3406c5b-Linux-x86_64.tgz -C /opt

  • Go to /opt/splunk/bin and run: sudo ./splunk start --accept-license (for the first time only)
  • To check the Splunk status and enable start at boot:

sudo ./splunk status (to get the running status)

sudo ./splunk enable boot-start

  • To get the files from Amazon S3 to the Splunk log location:

aws s3 cp s3://mule-log-affinity-forum/ /opt/log/ --recursive

  • Cron job to run the copy script daily and capture its output (a sketch of copyFromS3.sh is shown at the end of this section):

crontab -e

0 0 * * * /opt/scripts/copyFromS3.sh >> /opt/scripts/script.log 2>&1

  • To view the list of cron jobs:

crontab -l
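
For completeness, the copyFromS3.sh script referenced by the cron entry is essentially a wrapper around the aws s3 cp command shown above; a minimal sketch, assuming the same bucket and target directory:

#!/bin/bash
# /opt/scripts/copyFromS3.sh -- copy the CloudHub log files from S3 into the directory Splunk reads.
aws s3 cp s3://mule-log-affinity-forum/ /opt/log/ --recursive

Splunk also has to be told to index the files that land in /opt/log; one way to do that is a one-time monitor input (the sourcetype here is an assumption):

sudo /opt/splunk/bin/splunk add monitor /opt/log -sourcetype _json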

 

 

  2. ELK Installation:

  • Update the apt package index: sudo apt-get update
  • Install Java 8 (the oracle-java8-installer package is typically provided by the webupd8team PPA, so that PPA needs to be added first with sudo add-apt-repository -y ppa:webupd8team/java):

sudo apt-get -y install oracle-java8-installer

Step 1: Install Elasticsearch

  • Import the Elasticsearch public GPG key into apt:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

  • Create the Elasticsearch source list:

echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

  • Update your apt package database:

sudo apt-get update

  • Install Elasticsearch:

sudo apt-get -y install elasticsearch
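
After the install, Elasticsearch can be started and sanity-checked with a request against its default port (assuming the stock configuration listening on localhost:9200):

sudo service elasticsearch start

curl -X GET "http://localhost:9200"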

Step 2: Install Kibana

  • Create the Kibana source list:

echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.5.x.list

  • Update your apt package database:

sudo apt-get update

  • Install Kibana with this command:

sudo apt-get -y install kibana (Kibana is now installed.)
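
Kibana is installed but not yet running; it still has to be started before the UI described at the end of this document is reachable. On our Ubuntu instance this is a single command (the exact init mechanism depends on the release):

sudo service kibana start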

Step 3: Install Logstash

  • Create the Logstash source list:

echo 'deb http://packages.elastic.co/logstash/2.2/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash-2.2.x.list

  • Update your apt package database:

sudo apt-get update

  • Install Logstash with this command:

sudo apt-get install logstash

  • Logstash is configured to pick up the CloudHub log files as below (a sketch of the Logstash pipeline itself follows these steps).

Created a .sh file with the below command:

aws s3 cp s3://mule-log-affinity-forum/ /var/log/srcitps --recursive

  • Added a cron job as below:

crontab -e

0 0 * * * /var/log/scripts/copyFromS3.sh >> /var/log/scripts/script.log 2>&1
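
The cron job only copies the files; Logstash still needs a pipeline that reads them and ships the events to Elasticsearch. A minimal sketch of such a pipeline, placed under /etc/logstash/conf.d/ (the input path mirrors the copy target above, and the index name is an assumption):

input {
  file {
    path => "/var/log/srcitps/*.json"
    codec => "json"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "cloudhub-logs-%{+YYYY.MM.dd}"
  }
}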

 

For accessing Kibana:

URL: ServerIP:5600

Username & Password: kibanaadmin & admin

 

 

How to configure Amazon S3 access on the Amazon servers:

  • Steps to install the AWS CLI on Ubuntu to access the S3 location:
  • apt install awscli
  • aws configure (provide the access key and secret key)
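
A quick way to confirm that the configured credentials work is to list the bucket used throughout this document:

aws s3 ls s3://mule-log-affinity-forum/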