What is our primary use case?
We pull in information from cloud resources like AWS and Azure, and we just recently got into GCP. Pulling that data directly from the cloud was a bit easier than trying to do it from on-prem, and we can now do it a little more easily.
We have had a lot of cases where business units that were not even in Splunk got compromised for whatever reason. With Splunk Cloud, we could take the security logs from those units and import them directly, more quickly and easily. We have had several use cases exactly like that. In our company, we do not monitor logs from laptops, and we have had issues with users getting compromised on them, so we could pull the logs from there as well.
I also use it to monitor my universal forwarders so that I can see what versions they are on. CVEs came out against the universal forwarders, and we had to replace them. I have dashboards to keep track of our progress as we migrate and upgrade all those agents.
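As a rough illustration of the kind of query such a dashboard can be built on, the sketch below pulls forwarder versions out of Splunk's internal metrics data through the REST search API. The stack URL, token, and index and field names are illustrative assumptions, not details confirmed in this review.

```python
"""Hypothetical helper: list universal forwarder versions from Splunk's
_internal metrics data so upgrade progress can be tracked.
The host, token, and search details are assumptions, not the reviewer's setup."""
import requests

SEARCH_API = "https://example-stack.splunkcloud.com:8089/services/search/jobs"
AUTH_TOKEN = "..."  # a Splunk authentication token with search rights (assumed)

# tcpin_connections metrics events typically carry the forwarder's hostname and version.
SPL = (
    "search index=_internal source=*metrics.log* group=tcpin_connections "
    "| stats latest(version) AS uf_version BY hostname "
    "| sort uf_version"
)

resp = requests.post(
    SEARCH_API,
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    data={"search": SPL, "exec_mode": "oneshot", "output_mode": "json"},
    timeout=60,
)
resp.raise_for_status()

# A oneshot search returns its results directly in the response body.
for row in resp.json().get("results", []):
    print(f"{row['hostname']}: {row['uf_version']}")
```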
The biggest, heaviest use of Splunk Cloud Platform for us right now is people looking at our firewall logs to find the denies and figure out which firewall is doing the blocking. We are a medium-sized company, but with all the PCI and SOC 2 compliance audits we have, we have segmented everything. We have so many firewalls that there is always another firewall down the line that is blocking. The firewall team is in there all day, every day, and then we have other teams that go in to see whether the issue they are having with their app is a firewall issue or not.
How has it helped my organization?
I have done health checks several times now, and those have been very valuable in getting more information about what is going on in my platform. There are also recommendations about my environment. Sometimes it flags something I already know about, and when I explain why it is set up that way, they note that I am aware of it. Some things have to be that way for compliance reasons, or there are certain break-glass accounts that we have to keep in case our Okta is offline. It points out things like that.
One of the things we had to do was find out how much Splunk on-prem was costing us, because so many different groups were involved. There was the storage group, and then there was the hardware team. The indexers and the search heads were physical servers handled by the data center teams, which bought all the hardware. Everything else was virtual, and the virtual servers were still owned by us, which is fine, but then there was the storage on top of that, so we did not know the full cost. As I am trying to migrate from one data center to another, those teams do not want to migrate hardware; they want to buy new hardware, which, of course, is a cost to their department. They are another group, not ours, so we wanted to go to Splunk Cloud. We first had to find out the total cost of Splunk for our company so that we could show that moving to Splunk Cloud was going to save the company money, which it did. It saved at least a million dollars a year. We are oversized in some areas and running pretty close in others, but it is saving us money in the long term.
We monitor multiple cloud environments and have data in multiple clouds: AWS, Azure, and GCP, as well as our own on-premises environment, which is technically our own private cloud. We are a cloud customer for our clients, so we are in four different environments. It has been fairly simple to monitor multiple cloud environments using Splunk Cloud Platform. The documentation and the TAs have been updated and tell you which piece is which, so there is no confusion between a client ID, a tenant ID, a secret, a key, and the tokens. That has been very handy. We did have an incident where there was an S3 bucket somewhere, and one of our teams was unable to communicate with the Cloud Infrastructure team. The bucket had been set up as a file share only instead of another type, and that option was not available in the TA, so it became a challenge. We had to work with them, and they basically had to rebuild that bucket, because you cannot just add that capability to an existing bucket. They made a whole new bucket and put the logs in there. That was a challenge, but other than that, it has been very smooth and easy. We have had teams with incidents take all the data, put it into an S3 bucket, and Splunk took it right in.
Splunk Cloud Platform has helped reduce our mean time to resolve because we can get the data in faster. I have even automated things. We have a Python script, so I can take CSV files, send them to the endpoint, and quickly give the analysts all the data they need to do their evaluations, such as whether users went to bad sites. They can see all that information, and I can get it in quickly. With on-prem, I could do that, but it had to go through so many hoops because of the PCI requirements our company has. It is still PCI-compliant, but it is just so much easier to work with. I know we have had mean times of 60 days. We are reducing that to one or two weeks now, so it is getting a lot better.
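Since the review mentions a Python script that pushes CSV files to an endpoint, here is a minimal sketch of that kind of workflow, assuming the target is a Splunk HTTP Event Collector (HEC) token on the cloud stack; the URL, token, index, and sourcetype below are placeholders, not the reviewer's actual setup.

```python
"""Hypothetical sketch: send rows from a CSV file to a Splunk HTTP Event
Collector (HEC) endpoint so analysts can search the data right away.
URL, token, index, and sourcetype are illustrative assumptions."""
import csv
import json
import requests

HEC_URL = "https://http-inputs-example-stack.splunkcloud.com/services/collector/event"
HEC_TOKEN = "..."  # HEC token scoped to the target index (assumed)


def send_csv(path: str, index: str = "security_adhoc",
             sourcetype: str = "csv:investigation") -> None:
    headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    with open(path, newline="") as fh:
        # HEC accepts a batch of concatenated JSON event objects in one request body.
        payload = "".join(
            json.dumps({"event": row, "index": index, "sourcetype": sourcetype})
            for row in csv.DictReader(fh)
        )
    resp = requests.post(HEC_URL, headers=headers, data=payload, timeout=30)
    resp.raise_for_status()  # HEC replies with {"text": "Success", "code": 0} on success


if __name__ == "__main__":
    send_csv("compromised_user_activity.csv")  # hypothetical file name
```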
Splunk Cloud Platform has helped improve our organization's business resilience. That was something I had issues with on-prem. I had issues with an indexer, and it could be a hardware issue, a software issue, or an OS issue. With Splunk Cloud Platform, everything has been a lot more stable, and I do not have as many worries or problems there. If it is a heavy forwarder, the troubleshooting is still on me, and I can do that on my side, but there are a whole lot fewer things to look at and worry about. It took away a lot of headaches.
In terms of Splunk's ability to predict, identify, and solve problems in real time, real-time is a touchy word, because true real-time means searching the data as it is being indexed. There are only a few people in my company who are allowed real-time access, but it is pretty close; it is pretty much within seconds. You have access to all that data, so it has been handy. I had to explain to the teams how searches work in the background. Running a search every 5 minutes sounds great, but if there is any kind of delay in the data, you can miss something, so every 15 minutes is a little better. Still, you are seeing things within minutes and getting alerted about them. We connect to Microsoft Teams and Slack, and we send things to ServiceNow for the monitoring team. They are 24/7, so if something needs to be watched around the clock, that group now gets all that data in one place in ServiceNow, pulled from different monitoring tools besides Splunk. It is handy to be able to just pop it all in there quickly.
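To make the point about indexing delay concrete, the sketch below creates a scheduled search through the saved-searches REST endpoint with its time window shifted back a few minutes, so events that arrive late still land inside some run's window. The stack URL, alert name, query, and offsets are illustrative assumptions rather than the reviewer's real configuration.

```python
"""Hypothetical sketch: create a scheduled search whose time window lags
behind "now" so late-arriving data is still caught.
Names, query, and offsets are illustrative assumptions."""
import requests

BASE = "https://example-stack.splunkcloud.com:8089"
AUTH = {"Authorization": "Bearer ..."}  # Splunk authentication token (assumed)

params = {
    "name": "firewall_deny_spike",
    "search": "index=firewall action=blocked | stats count BY src_ip | where count > 100",
    # Run every 15 minutes, but search a 15-minute window that ends 5 minutes ago,
    # so events that are slow to index still fall inside one of the runs.
    "cron_schedule": "*/15 * * * *",
    "dispatch.earliest_time": "-20m@m",
    "dispatch.latest_time": "-5m@m",
    "is_scheduled": "1",
}

resp = requests.post(f"{BASE}/services/saved/searches", headers=AUTH, data=params, timeout=30)
resp.raise_for_status()
```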
The firewall use case is huge. Everybody is in there all day long, hitting that dashboard and searching for firewall blocks or denies. Sometimes they access it just to see whether something is connecting, because we do drop a lot of data. A great thing about Splunk is that we can drop some of the data at ingest if we need to. We do not keep all the connect events, but we can see whenever a connection is closed, which tells us the connection was made successfully and then closed. One way or the other, we can see whether things are being blocked or are able to connect, and that information is handy. We have a complex network, and there are times when we have routing issues. We can see in the logs that there is no route and say that it is a routing issue, and they then bring in the network team. The firewall team is the front point for all of that, but the network team has to work closely with them.
What is most valuable?
Just the fact that it is cloud-based is valuable. We are still on the Classic Experience; I am waiting for the Victoria Experience to come to GCP, because that is where our stack is. They say it is coming somewhat soon. We will see when that is.
There is also the flexibility of not having to manage all the indexers and search heads myself. I was doing that with on-prem before, and it was quite a bit of work; when there was an issue with an upgrade, I had to upgrade all of that myself. They handle that on the backend now. I still have to manage my heavy forwarders and my deployment servers, but it is a much lighter load for me as an admin.
What needs improvement?
For one of the areas I am working on right now, they did an update this week that gave me back a feature I had been using but that they took away at the last conference. They just gave it back to me, and I had to go through the setup again to make it work with our Okta. We have also had issues with the maintenance windows. Sometimes I was informed about those at the last minute. They are getting better about telling us when they are going to do maintenance, but there were times when they did maintenance and I came in the next day to find something broken. They have gotten a lot better about that. I am still working on a couple of issues. They have cases open for them, so they know about them and are working on them. The communication is getting better. That was an area that got a lot of feedback, and I can see that they are accepting the feedback and taking it to heart, which is great.
The Victoria Experience that was rolled out is not yet fully available everywhere.
The AI assistant is going to be good, but we are on GCP, so I am worried about how fast it is going to be rolled out and whether it is going to be nine months late for GCP customers. That would be a bad thing because it would put a black eye on the whole marketing side of it. The same goes for the Victoria Experience; they already have a black eye on that one. It has been two years since it came out, and they still do not have it on GCP, so they need to get that fixed. I would like to see the AI assistant feature as it rolls out. It plays into my plans to roll out ITSI and the O11y suite, with the AI assistant being brought over there as well. I have teams right now that hit me up because they have been using some kind of AI assistant. We have Microsoft Copilot, which is allowed in our company now, and they tell us not to use ChatGPT because it is not approved for whatever reason. Some of our people who are not Splunk users but have access to some dashboards and want to do a little bit of searching have also hit me up. If they use a generic AI to figure out how to do a generic Splunk search, it is not going to work in my environment at all, and they will wonder why it is not working. That is because the AI does not know our environment. It will be handy to have an AI assistant that does know our environment.
For how long have I used the solution?
I have been using Splunk Cloud Platform for a year and a half.
What do I think about the stability of the solution?
It has been quite stable. The fact that we are on GCP has been causing some pain. That is the only thing.
What do I think about the scalability of the solution?
That has been very nice. When we renewed our last contract, we saw that our long-term or archive storage was not enough, so we increased it. It is nice to have that visibility; it tells you when you are getting close to your limit or are over it, so you can see where you stand. The new, improved monitoring console that just came out has more information in there for that, which to me is even more valuable, so I am happy to see the new console they have released.
How are customer service and support?
For the most part, their technical support has been pretty good. Sometimes you get someone a little newer who asks basic questions because they do not know our knowledge level. If we are putting a case in, we have already tested steps a, b, and c; we would not put the case in otherwise. However, in some cases, you get in there and they immediately bump it up to the next level. They can quickly recognize that it is a real problem and escalate it a little faster than in the past when we were on-prem. With us being on Splunk Cloud, they are able to see and verify the issues faster. I would rate their technical support an eight out of ten. They are doing pretty well.
When it comes to customer service, the only issue we have seen is that they have changed our sales team three times in the last two years, which has been frustrating. I meet them all at Splunk conferences, and I feel like half the Splunk people there know who I am because they have been on our account team at one point or another. The teams are great, but there is a transition time for everything to move from one person to another, because they have to finish up with the team they were on while taking on the new team they are moving to. I understand that it takes time, but it is getting frustrating on our side. They could at least give us a year before they switch the team again.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We had used Enterprise Security before, but one team was using Splunk core with their own dashboards and other things they had built up. They were not using the pieces and parts specific to Enterprise Security, so we decided to drop it temporarily. It might come back, because whatever they have switched to is not particularly helpful; it is not as helpful as we were hoping.
How was the initial setup?
We worked with a third-party provider. We were in a bit of a hurry to get it done. We were able to do it quickly.
Because we were getting GCP, we were getting help from Google, and they ended up paying for the service provider who was helping us migrate. We paid for it upfront, and then Google paid it back to us as part of the contract we had with them. The good news was that we were able to get it done quickly, but it was quite a rush. It went fairly smoothly; there were a few roadblocks, but we were able to migrate.
It took us a full six months to move from on-prem to cloud. Moving the data took me a couple of days, but getting everything fully migrated and tested and making sure that all the teams were fully in there took a full six months, which for our company was pretty much lightning speed. It normally takes two to three years or something like that.
What about the implementation team?
We had a Splunk partner called TekStream.
What was our ROI?
We are seeing cost efficiencies with the move from on-prem to the cloud. We found out how much on-prem was costing us, and it is not just the cost of the storage or the hardware; there is also the cost of the time of the people who do all of that setup. We definitely saved quite a bit of money.
We have definitely seen an ROI. We have been able to add more and more data that we were dropping before because we did not have the license for it. We have started opening that up. We now bring in more events from Windows event logs and more things related to the firewall. We do not have to drop all of that; we can bring some of it in now.
What's my experience with pricing, setup cost, and licensing?
We were on an ingest-based license when we were on-prem, and when we switched to the cloud, we went to an SVC model, which has been a huge help. We are now able to ingest more data than before. I was known as Doctor No because I had to say no so many times when we were on the ingest model and maxed out. I am not that way anymore. A lot of times, our use cases are one-shot because security needs the data. With the SVC model, we do not worry about it as much. I know it is saving us huge amounts of money.
Which other solutions did I evaluate?
Unfortunately, we did not evaluate any other tools, and that was the issue. We were handed a tool to use, which is something our team did not like, and we have made that very clear. That is why we say that Enterprise Security might come back. We will see.
What other advice do I have?
End-to-end visibility is something we are working on. I have talked with the Gigamon vendor. We have Gigamon doing packet captures, but we want the metadata from that to come into Splunk so that we have longer retention, at least on some of that metadata. We do not necessarily keep the packets, and that is okay, but we can at least see the trending of some things a little longer than we currently do. It gives more visibility to more teams. I have 350 users in my Splunk Cloud Platform, and on the network side, we have the network teams with 20 to 30 people looking at things over there, so it gives visibility into more of the organization. That is one of the big benefits. We can see the network layer and then all the way up to the app layer. We want to get the O11y suite, and we already have AppDynamics, which we will be integrating pretty soon, probably next month. The other piece is going to be getting the network cleaned up. We are also seeing issues in GCP with some applications we have migrated there, and we will be able to see whether it is a slowdown in the cloud provider or not. Having this visibility and the end-to-end data and being able to correlate it is pretty helpful.
Splunk's unified platform can help consolidate networking, security, and IT observability tools. That is what we are working towards, and that is exactly what we are hoping for. I am hoping to bring in ITSI and the O11y suite. We already have AppDynamics, and we are going to be able to pull that in, which will start helping with that full visibility. To fully integrate it, I am going to bring in the O11y suite as well because, eventually, I see AppDynamics moving in that direction.
I would rate Splunk Cloud Platform a nine out of ten because it is very good. It is pretty stable.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Disclosure: I am a real user, and this review is based on my own experience and opinions.