
Splunk vs ELK

If you are in IT Operations in any role, you have probably come across either Splunk or ELK, or both. These are two heavyweights in the field of Operational Data Analytics. In this blog post, I’m going to share with you what I feel about these two excellent products based on my years of experience with them.

The problem Splunk and ELK are trying to solve: Log Management

While there are fancier terms such as ‘Operational Data Intelligence’, ‘Operational Big Data Analytics’ and ‘Log Data Analytics Platform’, the problem both Splunk and ELK are trying to solve is Log Management. So, what’s the challenge with log management?

Logs, logs, logs and more logs

The single most important piece of troubleshooting data in any software program is the log generated by the program. If you have ever worked with vendor support for any software product, you have inevitably been asked to provide – you guessed it, log files. Without the log files, they really can’t see what’s going on.

Logs not only contain information about how the software program runs, they may also contain data that is valuable to the business. Yep, that’s right. For instance, you can retrieve a wealth of data from your web server access logs to find out things like the geographical dispersion of your customer base, the most visited pages on your website, and so on.
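To make that concrete, here is a quick sketch (assuming a combined-format access log at the example path below) that pulls the most requested pages straight out of a raw log with standard shell tools:

# Top 10 most requested pages; in the combined log format, field 7 is the request path
awk '{ print $7 }' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -10

Splunk and ELK let you ask the same kind of question across thousands of such files, at scale, without writing one-off scripts.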

If you are running only a couple of servers with a few applications on them, accessing and managing your logs is not a problem. But in an enterprise with hundreds or even thousands of servers and applications, this becomes an issue. Specifically,

  1. There are thousands of log files.
  2. The size of these log files runs into gigabytes or even terabytes.
  3. The data in these log files may not be readily readable or searchable (unstructured data).

[Image: sources of log files]

Both Splunk and ELK attempt to solve the problem of managing ever-growing log data. In essence, they supply a scalable way to collect and index log files and provide a search interface to interact with the data. In addition, they provide a way to secure the data being collected and enable users to create visualizations such as reports, dashboards, and even alerts.

Now that you know the problem Splunk and ELK are attempting to solve, let’s compare them and see how each achieves it. I’m going to compare them in four areas:

  1. Technology
  2. Cost
  3. Features
  4. Learning curve for the operations team

Got it? I can’t wait to share. Let’s dive in.

[Image: Splunk logo] VS [Image: Elasticsearch logo]

Technology

Witnessing C++ vs Java has never been more exciting

While Splunk is a single coherent closed-source product, ELK is made up of three open-source products: Elasticsearch, Logstash, and Kibana.

Both Splunk and ELK store data in Indexes. Indexes are the flat files that contain searchable log events.

Both Splunk and ELK employ an agent to collect log file data from the target servers. In Splunk, this agent is the Splunk Universal Forwarder. In ELK, it is Logstash (and, in recent years, Beats). There are other means to get data into the indexes, but the majority of use-cases will be using the agents.
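To give you a feel for what deploying these agents looks like, here is a minimal sketch of each (paths, index names and hosts are examples, and the exact syntax varies by version):

# Splunk Universal Forwarder: monitor a log file (appends a stanza to inputs.conf)
cat >> $SPLUNK_HOME/etc/system/local/inputs.conf <<'EOF'
[monitor:///var/log/messages]
sourcetype = syslog
index = main
EOF

# Filebeat (ELK): ship the same file to Logstash (filebeat.yml)
cat > /etc/filebeat/filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
output.logstash:
  hosts: ["logstash.example.com:5044"]
EOF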

[Diagram: common components]

While Splunk uses a proprietary technology (primarily developed in C++) for its indexing, Elasticsearch is based on Apache Lucene, an open-source search library written fully in Java.

On the search interface side, Splunk employs a Search Head, a Splunk instance with specific functions for searching. ELK uses Kibana, an open-source data visualization platform. When it comes to creating visualizations, in my opinion, Splunk makes Kibana look plain. (Note: it is possible to connect Grafana to ELK to visualize data. Some believe Grafana visualizations are richer than Kibana’s.) With recent versions of Kibana, you also get Timelion, a time-series data visualizer.

For querying, while Splunk uses its proprietary SPL (Splunk Processing Language, with syntax that resembles SQL statements combined with Unix pipes), ELK uses Query DSL, with an underlying JSON-formatted syntax.
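As a rough illustration (the index and field names below are made up for the example), here is the same question, “how many HTTP 500s per host?”, asked in both languages:

# Splunk SPL, run from the search bar or via the CLI
$SPLUNK_HOME/bin/splunk search 'index=web status=500 | stats count by host'

# Elasticsearch Query DSL: the same idea expressed as JSON over HTTP
curl -s -XGET 'http://localhost:9200/web-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "query": { "term": { "status": 500 } },
  "aggs":  { "by_host": { "terms": { "field": "host" } } }
}'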

Let me summarize the technical info in the table below.

[Table: technology comparison summary]

In the end

Both Splunk and ELK are fundamentally very sound technologically. Though one can argue one way or the other, the longevity of these two products in the marketplace proves that they are both superior in their own way. However, Splunk differs in one crucial aspect: schema on read.

With schema on read, there is minimal processing required before indexing. In fact, you can throw almost anything at Splunk as long as Splunk can determine the Host, Source (the file the data is coming from) and Source Type (a meta field, determined manually or automatically, that tells Splunk what kind of log file it is). Fields are generally determined ONLY at search time.

However, with ELK, you generally provide the field mapping and parsing ahead of time (before indexing). One can certainly argue that this is not necessarily bad, but I’m going to leave that up to the community to decide.
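To make the difference concrete (the index, sourcetype and field names here are only examples): with ELK you would typically declare a mapping before indexing, whereas with Splunk you can extract a field on the fly at search time.

# ELK: declare field types up front (mapping syntax varies across Elasticsearch versions)
curl -s -XPUT 'http://localhost:9200/weblogs' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "clientip": { "type": "ip" },
      "status":   { "type": "integer" },
      "bytes":    { "type": "long" }
    }
  }
}'

# Splunk: no mapping needed; pull a field out of the raw event at search time
index=web "status=" | rex field=_raw "status=(?<status>\d+)" | stats count by status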

Cost

Is Open-Source really free?

Cost of the software: ELK is free. Splunk is not.

Splunk’s license fee is based on the daily log volume being indexed. For example, you may buy a 1 TB license, which will let you ingest up to 1 TB per day. There is no cost for keeping historic data; only the daily volume counts (the license meter resets at midnight every day). Furthermore, the cost is NOT based on the number of users or the number of CPU cores. You can get either a term license, which you pay for per year, or a perpetual license, which is a one-time fee plus an annual support fee (if any).

I’m unable to give you a dollar figure as it varies greatly based on the geographic location of your business, and obviously on the data volume (and the sales team you are working with :-)). But in general, compared to other commercial products in the market (Sumo Logic, Loggly, Sematext, etc.), Splunk *may* be on the expensive side. (Again, too many variables to give you a black-and-white answer.)

ELK is open source. You pay nothing for using the software.

But, and this is a big but, the cost of ownership is not just the cost of the software. Here are other costs to consider.

  1. Cost of infrastructure. Both Splunk and ELK require similar hardware infrastructure.
  2. Cost of implementing the solution. This is a big one. For example, when you purchase Splunk, you might get some consulting hours that you can use to implement your solution. With ELK, you are on your own.
  3. Cost of ongoing maintenance: This can also be a big one. Once again, you might get some support hours from Splunk, but with ELK, you are on your own.
  4. Cost of add-ons and plugins: Both Splunk and ELK have plugin/add-on based solutions that extend the functionality. Some are free and some are not. For example, you will have to pay for Shield (ELK security) and ITSI (Splunk IT Service Intelligence).

In the end

Yes, open source is free. But is it free, as in free? The biggest problem you will face, as an evangelist of ELK in your organization, is coming up with a dollar figure for the total cost. With Splunk, you have to be able to convince your organization to pay the cost. But at least in that case, the cost is predictable.

Features

Looking for something? There is an app for it.

Both Splunk and ELK have a myriad of features. When I say feature, it can be any of the following:

  1. Support for a certain type of data input. For example, does it allow data input via HTTP, or via a script? So, earlier when I said both Splunk and ELK employ an agent to collect data, I lied. Both products support several other means of getting data in.
  2. Data visualization functionality. For example, does it allow creating custom dashboards, reports, etc.? How feature-rich are they?
  3. Integration with other products/frameworks. For example, can it send/receive data from APM products such as New Relic, Dynatrace or AppDynamics? Can it send/receive data from Hadoop? Both products integrate well with many major platforms.
  4. Security features: Does it support role-based access control, two-way SSL or Active Directory integration? With Splunk, security is available out of the box. But with ELK, you pay for Shield (or X-Pack in recent versions).
  5. Data manipulation: How easy is it to modify the data being ingested? Can I mask sensitive information readily? Splunk provides powerful regular-expression based filters to mask or remove data. The same can be achieved with Logstash in the ELK world (see the sketch after this list).
  6. Extensibility: Can we easily extend the product by writing our own solutions?
  7. Metrics store: Indexing text (log files) is one thing. But indexing metrics (numerical data) is another. The performance of indexing and search is astronomically higher on a time-series metrics index. Splunk introduced this in version 7.
  8. Agent management: How are you going to manage hundreds or thousands of Beats or Splunk Universal Forwarders? While Ansible or Chef can be used with both products, Splunk has the advantage of letting you manage the Universal Forwarders using its Deployment Server (a Splunk instance with a specific function).
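Here is the masking sketch promised in item 5. It hides a US Social Security number pattern before it reaches the index (the stanza name, file paths and pattern are examples; both snippets belong on the ingestion tier):

# Splunk: a SEDCMD in props.conf rewrites the raw event at index time
cat >> $SPLUNK_HOME/etc/system/local/props.conf <<'EOF'
[my_sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
EOF

# Logstash (ELK): a mutate/gsub filter does the same in the pipeline configuration
cat >> /etc/logstash/conf.d/10-mask.conf <<'EOF'
filter {
  mutate {
    gsub => [ "message", "\d{3}-\d{2}-\d{4}", "XXX-XX-XXXX" ]
  }
}
EOF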

In the end

Since both ELK and Splunk have a strong user community and good extensibility, there is no shortage of plugins and add-ons. (In the Splunk world, there is the notion of apps.)

splunkbase.splunk.com (https://splunkbase.splunk.com/)


Elastic Search Plugins (https://www.elastic.co/guide/en/elasticsearch/plugins/current/index.html)


Learning Curve for the operations team

From 0 to 60mph in 3 seconds. Really?

Both Splunk and ELK have massive product documentation. Perhaps too much documentation if you want to get started rapidly. The learning curve for both products is steep.

For both products, a solid understanding of regular expressions (regex), scripting (Shell/Python and the like) and TCP/IP is required.

For performing searches, you must learn SPL (Splunk Processing Language) for Splunk, and Query DSL for Elasticsearch. SPL is like Unix pipes plus SQL, and it has tons of commands you can use. With Query DSL, the queries are formatted as JSON. In both products, the search language can easily overwhelm a new user. Because of the sheer number of features SPL provides, Splunk can be much more intimidating than ELK. (In fact, there are 143 search commands in SPL as of Splunk Enterprise 7.0.1.)

Creating visualizations also requires some learning. Here again, Splunk provides more features and might look more intimidating than Kibana to a new user. Note that you can also use Grafana to connect to ELK to visualize data.

Perhaps the biggest hurdle you will face with Splunk is the server-side configuration and administration. The entire product is configured using a bunch of .conf files. One will need intimate knowledge of the specifications of these configuration files in order to implement and support Splunk. While ELK does require some reading on server-side setup, it’s not nearly as much as Splunk.

In the end

Splunk does have a steeper learning curve compared to ELK. But whether it is a showstopper for you or not is something you have to decide. You will have to invest in a few resources with solid Splunk experience if you want to implement and support the solution correctly.

So, there you have it: Splunk vs ELK at a super high level. I haven’t gone deep into the technical aspects, for brevity. But there is plenty of documentation for both Splunk and ELK online. Just get lots of coffee before you begin 🙂

Let me know what you think.

Happy Monitoring!

 

How to enable colors in shell and vi in Mac?

When working with shells, if your Mac does not show colors automatically, you can enable them in two easy steps.

First, add the line shown below to your .bash_profile. This file should be in your home directory.

export CLICOLOR=1

Second, add the following line to your .vimrc file. This file should also be in your home directory. If it is not there, create one.

Read More

What is Virtualization?

Virtualization is a technique by which you can run multiple operating systems (guests) on a single physical server (the host) by abstracting (or virtualizing) CPU, memory, disk and network resources. The core component of any virtualization solution is the hypervisor – the software that performs the abstraction of the bare-metal resources.

Here are the primary benefits of using Virtualization:

  1. Save cost on hardware
  2. Centrally manage the infrastructure
  3. Add effective fault tolerance and high availability
  4. Dynamically update the infrastructure

The diagram below shows virtualization at a high level.

Read More

5 reasons why you can’t afford NOT to Virtualize

The verdict is in: virtualization is the future. If you are still running your applications on bare metal, you are missing out on tons of benefits, or even hurting your business. Virtualization is a software technology that lets you run multiple operating systems and applications on a single physical server by abstracting the hardware underneath. Among the several makers of virtualization software, the following are considered leaders:

VMWare (ESXi)

Citrix (XenServer)

Microsoft (Hyper-V)

Let’s dive into 5 reasons why you can’t afford NOT to virtualize (not necessarily in any order).

Read More

How to use AppDynamics to monitor Server health?

Yes, AppDynamics is awesome for application monitoring – Java heap, deep transaction tracing, tons of out-of-the-box framework monitoring (JDBC, web services, etc.), and the list goes on. But did you know AppDynamics can be used to effectively monitor servers too, whether virtual or physical? When I say server, I mean the host operating system, such as Red Hat Enterprise Linux, Windows 2012, Solaris, etc. Let me show you how you can do this.

Enter AppDynamics Machine Agent

While Java can be monitored using a Java agent, a server can be monitored using a special type of agent called the Machine Agent. You will need a license to run these agents (when you purchase application agents, AppDynamics typically throws in the same number of machine agents, so you should be good in terms of additional cost). If you are not sure about your present licensing situation, click on ‘licensing’ in your Controller UI as shown below.

Unlike application agents, which run inside the JVM/CLR, the Machine Agent is a standalone Java program that runs on the host operating system. It collects hardware metrics and sends them to the Controller (once a minute). A user can view these metrics via the Controller UI. Pretty simple, huh?
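For a feel of what that looks like, here is a minimal sketch of starting the agent from a shell (the install path and file names are assumptions; check the layout of your agent bundle):

# Start the AppDynamics Machine Agent as a background Java process
cd /opt/appdynamics/machine-agent        # example install location
nohup java -jar machineagent.jar > machine-agent.out 2>&1 &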

Read More

How to install Apache Web Server using Yum?

Software installation became a whole lot more fulfilling once YUM came along.

YUM is the most popular RPM-based interactive package manager. It is super powerful and reliable.

In this quick article, I show how to install Apache, the world’s most popular web server, on your Linux server.

You need root access to do this. You also need internet access on the server where you are installing Apache.

Simply run the command

sudo yum install httpd

That’s it. Yum does the rest.

Once done, which takes about 10 seconds, start the httpd server

sudo service httpd start

Once the service starts, simply use a browser to access the server (use the default server name or the ip address).

Or you can choose the geeky way and use curl to test it out. I created a basic HTML file. Here is how to access it:

curl http://localhost/index.html

You can check the access log to see how you did. access_log is present under /var/log/httpd/.
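If you want the whole loop in one go, here is a minimal sketch (the page content is just a placeholder):

echo '<h1>It works!</h1>' | sudo tee /var/www/html/index.html
curl http://localhost/index.html
sudo tail /var/log/httpd/access_log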

The configuration file for Apache (httpd.conf) will be under /etc/httpd (the exact location depends on your platform).

That’s it. Your own enterprise grade Web Server, up and running in about 10 seconds.

Way to go YUM !!

Buckle up! You can get your own AWS server in the cloud. You can run a variety of operating systems on it, connect to any popular database you want, and even get your hands on some of the coolest products from AWS. I don’t know about you, but I’m psyched about all this.


Yes, the Amazon Web Services Free Tier allows you to have your own server in EC2 for 12 months. If you are new to AWS, this is a great way to get your feet wet, or maybe drenched.

This article shows exactly how you sign up and crank up your own server in the cloud. It takes about 15 to 20 minutes to get your hands on a brand new Amazon Linux instance (or Windows, SUSE, RHEL, …).

Without further ado, here are the actual steps to follow.

Read More

Introduction to APM: Benefits of APM

So, what can an APM tool buy you? Setting aside the hypothetical ‘peace of mind’ marketing pitch, let me show you exactly how an APM tool can help you support your application effectively.

1. Historic Monitoring of Key Metrics

An APM tool can record monitoring metrics over time, which is invaluable in troubleshooting. For example, take a look at the ‘response time’ graph of a particular application. You can readily see that the application suffers during business hours.

[Graph: historic response time]

Read More

Introduction to APM (Application Performance Management)

Back in the 90s, when I was working as a Solaris/HP-UX administrator, all I needed was two or three commands to figure out what was wrong with a particular server or application. I would just glance at ‘vmstat’, ‘iostat’ and ‘top’ for a minute or two and the problem would reveal itself clearly. While those commands still prove valuable at a certain level, in order to answer ‘Why is the application slow?’ you need much more than just a few OS commands.

Read More

The most important JVM memory tuning parameter- and how to ace it.

One of the deadliest errors you can encounter in your Java application is the ‘OutOfMemory’ error. The impact this error can have on your application and your business can be enormous. I have seen organizations spend countless hours struggling to fix this error while continuing to deliver a degraded end-user experience (a sure-fire way to lose credibility). One company was actually restarting their application every three hours just to avoid an ugly hang due to OOM. How sad is that?

In this article, I would like to point out the most important tuning parameter in your Java application. If you get this wrong, no matter how much tuning you put in, you will end up with an OOM sooner or later. Note that there are tons of parameters available to tune memory. Most of them may not have much effect at all, while some have a game-changing effect. The parameter I’m about to reveal is of the latter kind, and it is the mother of all parameters. It should be the first parameter you analyze and zero in on.

Without further ado, here it is.

Read More