I run the function that reviews usage for the team and for the organization itself.
We use this product internally, and some of the other businesses we have relationships with get their data from our data. It's mostly for the collaborative data reporting that we do with them.
The most valuable feature is Kibana out of the box. You plug it in and can start basic analysis of the data right away. It also gives a quick way to check the data and the models to figure out what fits our needs.
There are a few things that did not work for us.
When doing a search in a bigger setup, with a huge amount of data and several streams coming in, the search has to sit on top of the single index that we query.
There should be a way to do a more distributed kind of search. For example, if I have multiple indexes across my applications and want to do a correlation between the searches, it is very difficult. From a usage perspective, this is the primary challenge.
I would like to be able to do correlations between multiple indexes. There is a limit on the number of indexes that I can practically query at once. I can do an all-index search in theory, but in practical terms we cannot do that.
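For context, Elasticsearch does let a single query span more than one index, either by listing them or by using a wildcard pattern; what is missing is any join-style correlation between the documents that come back. A minimal sketch of a multi-index search, assuming the elasticsearch-py 8.x client and hypothetical index names:

    from elasticsearch import Elasticsearch

    # Adjust the URL for your own cluster.
    es = Elasticsearch("http://localhost:9200")

    # One search can target several indexes (comma-separated, or a wildcard such as "logs-*"),
    # but each hit is still an independent document; nothing joins results across indexes.
    resp = es.search(
        index="logs-app1,logs-app2",
        query={"match": {"message": "timeout"}},
        size=20,
    )

    for hit in resp["hits"]["hits"]:
        print(hit["_index"], hit["_source"].get("message"))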
In the next release, I would like to have correlation between multiple indexes and to be able to save the in-memory state to disk once we have built the index and it is running.
Once the system is up, it starts building that state in memory.
We need to be able to distribute it across nodes, or save it, to get a faster load time.
We don't make many changes to the data that we are creating, but we would like archived reports and to be able to retrieve those reports to see what is going on. That would be helpful.
Also, if you provide a customer with a report or some archived queries, it will be slow the first time the customer looks at them, while their data is loaded, and again on subsequent runs. I want it to be up and running efficiently.
If that memory could be saved and put back into memory as it is, so the system starts working straight away, it would reduce the load time, be more efficient from a cost perspective, and optimize resource usage.
I have been familiar with this product for approximately four years.
ELK Elasticsearch is stable.
It's scalable, but there are some limitations.
If you are scaling a bit too quickly, you tend to break the applications into different indexes.
The limitations come in when getting the correlation between the applications or the logs.
It is difficult to get the correlations once the indexes have been split.
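A partial workaround is to attach the per-application indexes to a single alias, so that one search or aggregation can span all of them; this does not give true correlation or joins, but it keeps the split indexes queryable as a unit. A sketch, again assuming the elasticsearch-py 8.x client and illustrative index names:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Attach every per-application log index (logs-app1, logs-app2, ...) to one alias.
    es.indices.put_alias(index="logs-*", name="all-app-logs")

    # A single aggregation over the alias then spans all of the split indexes.
    resp = es.search(
        index="all-app-logs",
        size=0,
        aggs={"hits_per_index": {"terms": {"field": "_index"}}},
    )
    print(resp["aggregations"]["hits_per_index"]["buckets"])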
We are using the open-source version, which is installed on-premises.
We have not worried about technical support, but the community is good.
Before ELK, we used another solution for internal usage, and we also used Splunk for different use cases in a different organization altogether.
It wasn't a switch per se, it was a different organization with a different use case.
The initial setup is simple, not too difficult.
Getting the index right, doing your models, and putting the data in correctly is done more on a trial-and-error basis. You have to start early and plan it well to get it right.
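In practice, planning the model upfront mostly means defining explicit mappings before any data is loaded, rather than relying on dynamic mapping to guess field types. A minimal sketch, assuming the elasticsearch-py 8.x client and a hypothetical usage-logs index:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Define field types upfront instead of letting dynamic mapping guess them.
    es.indices.create(
        index="team-usage-logs",
        settings={"number_of_shards": 3, "number_of_replicas": 1},
        mappings={
            "properties": {
                "timestamp":   {"type": "date"},
                "team":        {"type": "keyword"},
                "message":     {"type": "text"},
                "duration_ms": {"type": "long"},
            }
        },
    )

    # Documents indexed afterwards have to fit this model.
    es.index(index="team-usage-logs", document={
        "timestamp": "2024-05-01T10:00:00Z",
        "team": "analytics",
        "message": "report generated",
        "duration_ms": 420,
    })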
We are using the open-source version.
We are not looking into the subscription because it's on-premises in-house.
For anyone who is looking into implementing this solution, my only tip is to define your models upfront, for the actual type of use you are looking at, in order to have a good run.
I would rate ELK Elasticsearch a seven out of ten.
You're right, Ayesha. The ELK stack is not for the faint of heart. One needs strong Linux admin skills and an understanding of KQL, data structures, data pipelines, etc.
It is a very customizable product, and if using an on-prem solution one needs to understand sharding, index lifecycle management, and so on.
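For instance, a minimal index lifecycle policy sketch, assuming the elasticsearch-py 8.x client (the policy name and thresholds are only illustrative):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Roll the write index over at 50 GB or 30 days, and delete data after 90 days.
    es.ilm.put_lifecycle(
        name="app-logs-policy",
        policy={
            "phases": {
                "hot": {
                    "actions": {
                        "rollover": {"max_primary_shard_size": "50gb", "max_age": "30d"}
                    }
                },
                "delete": {
                    "min_age": "90d",
                    "actions": {"delete": {}},
                },
            }
        },
    )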
Highly recommended.