What is our primary use case?
We're using Devo as an operations and security event management logging platform. We're shipping all of our log data and telemetry into Devo, including G Suite, Okta, GitHub, Zscaler, and Office 365; pretty much all of our logging data goes into it. And we're using Devo to do analytics, alerting, and searching on that log data. The analytics are things like averages, min/max, and counts on certain types of log data, performance metrics, for monitoring uptime/downtime and health.
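To illustrate the shape of those analytics, here is a minimal sketch in Python/pandas rather than Devo's own query language, whose syntax I won't try to reproduce here; the field names and data are hypothetical stand-ins for our actual schema.

```python
import pandas as pd

# Hypothetical performance-metric log records (made-up data).
logs = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-06-01 10:00", "2021-06-01 10:05",
        "2021-06-01 10:10", "2021-06-01 11:00",
    ]),
    "service": ["api", "api", "auth", "api"],
    "latency_ms": [120, 340, 95, 210],
})

# Average, min/max, and counts per service per hour: the same shape of
# analytics we run in Devo for uptime/downtime and health monitoring.
hourly = (
    logs.set_index("timestamp")
        .groupby("service")
        .resample("1h")["latency_ms"]
        .agg(["mean", "min", "max", "count"])
)
print(hourly)
```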
How has it helped my organization?
Devo provides high-speed search capabilities and real-time analytics. Nowadays, everything is about data analytics. Our infrastructure is many disparate systems that have to work in unison to make something happen, and our security stack is the same: various pieces working in different ways toward one outcome. Being able to combine that data and get real-time context, alerting, and visibility into it is key. Previously, we'd have to go look in the G Suite log to find an authentication issue, and then we'd have to enrich that issue with something from someplace else. Usually it would even be a separate person doing it. The old way was very problematic. Having one repository where the data is combined, and where you can do the analytics and all the enrichments, saves a tremendous amount of time.
We benefit from the speed at which we can triage and troubleshoot things and get to the bottom of security events and issues. What used to take many minutes, and up to hours, of making different API calls and gathering different data sources is now streamed into Devo in real time as it happens, where we can look at it.
As an example, I'm building analytics profiles for GitHub, so that I know what normal access to a GitHub repository looks like and what abnormal access looks like. If someone modifies a repository in a way it shouldn't be changed, I know right away.
I also know if someone tries to access some of our internal repos or other SaaS solutions without being on our Zero Trust network. Those types of things really start to stand out. It takes a large amount of data from disparate systems to make those detections work, and troubleshooting them can be very problematic unless you have that data in a centralized location. So the speed at which we can operate our security stack is something we've gained.
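As a rough illustration of that normal-versus-abnormal profiling, here is a minimal sketch in Python; the actors, repos, and actions are made up, and the real profiles in Devo are far richer than a simple lookup.

```python
from collections import defaultdict

# Hypothetical (actor, repo, action) tuples from a GitHub audit log.
history = [
    ("alice", "core-api", "push"),
    ("alice", "core-api", "push"),
    ("bob",   "core-api", "pull"),
]

# Build the baseline: which actor/action pairs are normal per repo.
baseline = defaultdict(set)
for actor, repo, action in history:
    baseline[repo].add((actor, action))

def is_abnormal(event):
    """Flag any actor/action pair never seen before on this repo."""
    actor, repo, action = event
    return (actor, action) not in baseline[repo]

# "mallory" force-pushing to core-api was never seen before -> alert.
print(is_abnormal(("mallory", "core-api", "force_push")))  # True
```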
It saves us hours a day. It really depends on what we're troubleshooting, but it has saved me hours on just the stuff I need to do. There's definitely a cost savings.
It provides more clarity for network, endpoint, and cloud visibility because we're pumping all our data into it: DNS traffic data, Zero Trust data from Zscaler, all of the authentication data from Okta, Google, and O365, as well as the endpoint data from our own product. We can query all that data in a centralized manner and correlate it, but that's because we're putting the data into it. Confidence in the actions you need to take comes down to context. The more context you can get before you do something or make a decision, the better. The context we get from having everything centralized, by combining all those data sources, gives us the complete picture of an issue and how long it has persisted. Then we can make a better decision on how to solve it.
What is most valuable?
I like their query language and I like their speed.
Ultimately what it comes down to for us is, "Can we write advanced queries that bind the different data sets together?" and that is what we're doing. We can see an event, an IP or its DNS name, in one place, then search all our other log streams to find it there too, and then take data from those results and pivot into searches across other types of data.
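To show what that pivot looks like, here is a minimal sketch in plain Python rather than Devo's query language; all stream names and records are hypothetical.

```python
# Hypothetical log streams keyed by source.
streams = {
    "dns":      [{"ip": "10.0.0.5", "query": "evil.example.com"}],
    "okta":     [{"ip": "10.0.0.5", "user": "alice", "event": "login_failed"}],
    "endpoint": [{"ip": "10.0.0.9", "process": "chrome.exe"}],
}

def pivot(ip):
    """Find every record mentioning this IP across all log streams."""
    return {
        name: [r for r in records if r.get("ip") == ip]
        for name, records in streams.items()
    }

# Saw 10.0.0.5 in one event; now pull its trail from every other stream.
for stream, hits in pivot("10.0.0.5").items():
    print(stream, hits)
```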
What needs improvement?
There's room for improvement in the GUI, and some room for improvement in the native parsers they support. But I can say that about pretty much any solution in this space. Those are the standard areas where vendors need to improve because that's usually where they lag.
For how long have I used the solution?
We've been using Devo now for about six months.
What do I think about the stability of the solution?
There haven't been any major issues or hiccups since we deployed it.
What do I think about the scalability of the solution?
We bought a certain scale, a certain data-ingest rate, and it has had no problem keeping up with that rate. From the PoC and the deep dive we did, we know the system scales horizontally. We tested it, and I'm quite confident it can scale.
We're going to keep throwing more and more data into it. After all the security data is in there, the next layer is going to be performance telemetry; we'll monitor for things like network lag and system performance. That more operational data will go in when we get there, probably in the next three to six months. Right now we run the majority of that on Elastic, and we'll be looking at swapping over. It's just a matter of planning it out so there isn't an impact.
How are customer service and support?
Our experience with their technical support has been good, for the few times we've had to use it. We haven't had to use it very often, which is a good thing. Ben, my lead engineer, has contacted them and has had no complaints. They've been responsive and have answered his questions. We also have them on Slack.
When talking about a customer-first approach, they're pretty good. They worked with us for the things we needed them to work with us on. They were understanding of timelines. They've been very forthcoming.
I have pretty high expectations, so I wouldn't say they have exceeded them, but they haven't disappointed me either. That's good. There are very few vendors about which I can say that.
Which solution did I use previously and why did I switch?
Prior to using Devo, we were using QRadar. We switched because, when we looked at the data we wanted to throw at QRadar, it was going to fall over and blow up, and the amount of money IBM wanted for that volume of data was absurd. It's a legacy system that operates and scales in a legacy way. It just can't handle what we plan to throw at it from our infrastructure as we ramp up toward IPO.
How was the initial setup?
The initial setup was actually pretty easy. It's delivered as SaaS: you get instructions on how to start pointing data at it, the data starts going in, and Devo is able to auto-parse it. It works well.
We were shipping production data into it, as part of our PoC, within a couple of days of starting. It didn't take very long.
Our implementation strategy was to identify the areas that had the most critical data we wanted. We then went one by one through those areas and figured out how to get them into Devo, whether we were shipping natively, API-to-API (as with AWS), or whether we had to deploy the Devo collector, which was easy. The collector is just a VM, or an image. We deployed those images and started shipping the data in. Once the data was in, we started writing and tuning our own rule sets.
For the deployment, we had one SIEM engineer who was working on QRadar and I redeployed him on Devo. He had all of the data sources that were going into QRadar redirected into Devo within three or four days. He could have done it quicker if it wasn't for change management. It was really not an administrative burden at all to deploy.
As for maintenance, it's a SaaS service, so we're just running the SIEM as operators. We have a full-time SIEM engineer, but a lot of his job isn't maintaining the tool. His job is more about continuing to drive additional value out of it, which means writing more and more advanced rule sets, correlations, and analytics, more than anything else.
There are about 10 to 15 people who have access to Devo in our company, including security research people who are looking for trending there. Our IR and threat-hunting and security teams have access to it and our SRE team has access because we're also shipping some of our SRE telemetry into it.
What was our ROI?
We've seen ROI, just from the time savings alone. I can't say we have recovered what we spent on it, but our staff is absolutely spending less time doing certain things, and getting more things done within the time they have, using the tool.
What's my experience with pricing, setup cost, and licensing?
Devo's licensing model, given that they only charge for ingestion, is fine. It's risky for them, but I'm assuming they're going to manage that. If I'm ingesting a little bit of data but running a ton of queries on it, it's going to be much more expensive for them. Whereas if I ingest a ton of data and only query it every so often, they will make more money off of it.
Support was included in our licensing.
Which other solutions did I evaluate?
We looked at Humio and Splunk. Splunk was too expensive, so we ruled them out right away. Devo was the only one we went all the way through the hoops with.
Devo is on par with Splunk. It's definitely farther ahead than Humio was. Splunk has more apps and more integrations because it's been around longer and it's bigger, but ultimately the query language is just as useful. The languages are different, but there's nothing I can do in Splunk that I can't do in Devo. Once I learn the language, they're equivalent. There isn't anything necessarily better with Devo, but Splunk is kind of the old standard when it comes to threat hunting.
Devo is definitely cheaper than Splunk. There's no doubt about that. The value from Devo is good. It's definitely more valuable to me than QRadar or LogRhythm or any of the old, traditional SIEMs. Devo is in the next gen of cloud SIEMs that are coming. I think Devo plans to disrupt Splunk, or at least take a slice of the pie.
I wouldn't say that Devo ingests more data compared to other solutions. But the thing Devo does better than other solutions is give me the ability to write queries that look at multiple data sources and run fast. Most SIEMs don't do that. And I can do that by creating entity-based queries. Let's say I have a table with Okta, a table with G Suite, a table with endpoint telemetry, and a table with DNS telemetry. I can write a query that says, "Join all these things together on IP, and where the IP matches in all these tables, return to me that subset of data, within these time windows." I can break it down that way. That entity-based querying, where you're creating a complex entity, is much more powerful than the old legacy vendors. You can do it with Splunk, but with Splunk you have to specify the indexing upfront so that it's indexed correctly. With Devo, as long as you know what you want and tell them how you want it laid out on disk, it tends to work better.
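To make the entity-based join concrete, here is a hedged sketch in Python/pandas rather than Devo's query language; the tables, columns, and five-minute window are hypothetical stand-ins for the real schema, and only two of the four sources are shown for brevity.

```python
import pandas as pd

# Hypothetical Okta authentication events.
okta = pd.DataFrame({
    "ts": pd.to_datetime(["2021-06-01 10:00"]),
    "ip": ["10.0.0.5"],
    "okta_user": ["alice"],
})

# Hypothetical DNS telemetry.
dns = pd.DataFrame({
    "ts": pd.to_datetime(["2021-06-01 10:02"]),
    "ip": ["10.0.0.5"],
    "domain": ["evil.example.com"],
})

# merge_asof joins the nearest records per IP within a 5-minute window,
# approximating "where the IP matches in all these tables, within these
# time windows." Chaining further merges would fold in more sources.
joined = pd.merge_asof(
    okta.sort_values("ts"), dns.sort_values("ts"),
    on="ts", by="ip",
    tolerance=pd.Timedelta("5min"), direction="nearest",
)
print(joined)
```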
I've been happy with Devo. They're a smaller company, so they're more hungry for your business than, say, a Splunk. They're more willing to work with you and be customer-focused than a Splunk is, for sure. And that's the same with QRadar or any other big ones. That's a plus.
What other advice do I have?
Be very realistic about what you want to send into it and make sure you have use cases for the data you send, though that's the same anywhere. One of the problems a lot of people had with the old SIEMs is that they sent all of their data and then figured out use cases for it afterwards. I'm much more a firm believer in figuring out the use cases first and then sending the data.
Make sure you have the data you're going to be shipping into it well documented. Don't, by default, take everything you're shipping in your SIEM and ship it to Devo. That's probably not the best use of your time.
Also, really start thinking about complex use cases, things like "If A and B and C happened, but A, B, and C are on different data sources, then tell me that there's a problem." That's not something you used to be able to do on a traditional SIEM, or at least not very effectively. So start thinking about the more complex data analytics use cases to improve your learning and your logic. That's really the power of Devo.
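As an illustration of that "A and B and C across different data sources" pattern, here is a minimal sketch in Python; the conditions and event shapes are hypothetical, and a real correlation rule would also scope the time window.

```python
# Hypothetical conditions, each matching a different data source.
def failed_okta_login(e):
    return e["source"] == "okta" and e["event"] == "login_failed"

def suspicious_dns_lookup(e):
    return e["source"] == "dns" and e["domain"].endswith(".example.com")

def new_process_start(e):
    return e["source"] == "endpoint" and e["event"] == "process_start"

CONDITIONS = [failed_okta_login, suspicious_dns_lookup, new_process_start]

def correlate(events):
    """Alert only when every condition matches somewhere in the window."""
    return all(any(cond(e) for e in events) for cond in CONDITIONS)

# One window of events drawn from three different sources.
window = [
    {"source": "okta", "event": "login_failed"},
    {"source": "dns", "domain": "evil.example.com"},
    {"source": "endpoint", "event": "process_start"},
]
print(correlate(window))  # True -> raise an alert
```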
It's pretty easy to use. My guys have had no problem getting up to speed on it. I wouldn't say it's easier to use than some of the others, but it's as easy to use. Once you learn the language, you can start writing the rule sets, and you can actually have the GUI show you the language it is using. So, we have had no issues in that regard. It's well-documented.
The trending we're interested in is not the 400-day rolling window that Devo provides. We use a six-month rolling window for audit and/or investigative purposes. If we find something, we can go back and look at it very quickly to see how long it has been happening in our environment. We haven't really been historically trending over more than six months. Eventually we may expand into using the 400 days, but right now we're focused more on blocking and tackling, which requires shorter windows.
Overall, I have no issues with it and my guys love it.
Devo is what we thought it would be when we bought it. It's basically a high-speed analytics engine that allows us to query our data at speed and scale, and combine it together. That was the whole purpose, and it is what it is. We had a very mature idea of what we wanted when we went looking.
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.