I rate Gurucul Next-Gen SIEM eight out of ten. I would recommend Gurucul to anyone because it provides almost all the SIEM features offered by the leaders at a low cost. You can achieve the sophistication of leaders like Splunk.
Founder and CEO at Woodside Security Consultants, LLC
Reseller
Oct 4, 2023
Gurucul Next Gen SIEM is a seven out of ten for me, while all other SIEMs are five or fewer. I'm a tough grader. I haven't spent enough time with Gurucul yet to give them a higher score, but I expect them to earn it. I believe in their technology and in the work we're doing together, so I'm confident that their score will improve. However, I'm not easy to please. I don't give A's unless they're earned, and Gurucul hasn't had a chance to earn an A with me yet.

Gurucul has not yet provided us with context-driven risk prioritization, due to where we are in our implementation of it. We have a lot of work to do. With any SIEM, the first step is to discover what is happening, make decisions about what it all means, and then determine what actions to take. The primary goal of using a tool like Gurucul is to get accurate information so we can continue that analysis and determine our reactions.

In some cases, there are obvious reactions, such as detecting a user logging in from a country they have never logged in from before, at the opposite side of the clock, while failing their password or two-factor authentication. That log event is essentially saying, "Pay attention to this!" These types of events generate their own priority because of the type of attack they represent. As we add newly discovered events to a category, we can say that we want an alert whenever we see them again. And if we see a user doing a certain thing and failing their password, we can disable the account.

We can create an automated reaction to an alert. The action itself is not taken by the Gurucul SIEM, but by a system that we can program, using the Gurucul SIEM, to send a message either to a human being who disables the account, or to an automated system, such as an IAM controlling system, that disables the account. All of this is possible, and it is an integration that goes beyond just discovering the alert. We use Datadog, a traditional SIEM, and Gurucul Next Gen SIEM.
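The automated reaction described above can be sketched roughly as follows. This is an illustrative outline only, not Gurucul's actual API: the alert fields (`new_country`, `auth_failed`) and the two callbacks standing in for an IAM integration and a human notification channel are all hypothetical names.

```python
# Hypothetical sketch: route a SIEM alert to an automated reaction.
# Alert shape and callback names are assumptions for illustration,
# not Gurucul's real interface.

def react_to_alert(alert, disable_account, notify_human):
    """Decide whether an alert warrants automated action.

    disable_account / notify_human are callbacks supplied by the
    surrounding integration (e.g. an IAM system and a ticketing tool).
    """
    # Obvious case from the text: never-before-seen login country plus
    # failed password or 2FA -> disable the account immediately.
    if alert.get("new_country") and alert.get("auth_failed"):
        disable_account(alert["user"])
        return "disabled"
    # Everything else goes to a human for review.
    notify_human(alert)
    return "escalated"
```

The point is that the SIEM's job ends at raising the alert; the disable action lives in whatever system the callbacks wrap.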
I have run and examined many SIEMs in my career. I think that everyone in the SIEM business hopes that they are not missing much, but we know that we are. For security professionals like me, this is a constant worry. Being tough does not mean that I am snoozing all day and eating ice cream. It does mean that we expect to have more complete information and better visibility, and to pay closer attention to anomalies because they stand out.

In a traditional SIEM, if we do not have a rule, the system cannot alert us. We do not receive an anomaly alert, or any alert at all. Without a rule, the event is simply stored in the database, and the traditional SIEM has no reason to tell us about something for which there is no rule. The idea here is that we are moving away from having to predict the future and write rules ahead of time in order to see everything that happens. We all know that this is impossible in a traditional SIEM. Instead, we are analyzing all of the events that come across and determining whether the system recognizes them as normal behavior or as anomalies. We then have to decide what to do with these anomalies.

When we first implement a system like this, there is a significant amount of work involved in identifying the things that are not on our normal baseline. We need to see these anomalies and decide whether they belong in the normal baseline or whether they should remain in an alert state. This is essentially a binary choice: either the event is an alert-worthy abnormality or it is not. Even if it is brand new, it may simply be a new application that has been deployed, a new user who has joined our organization, or a new two-factor authentication mechanism that we are using. All of these things will appear as anomalies because the log stream will be different. When we have the opportunity to look at these anomalies, we then decide what they mean for our organization. This is work, and there is no getting around it.
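The triage loop described above is a binary decision per anomaly: fold it into the normal baseline, or keep it alerting. A minimal sketch, assuming an invented event-key scheme and treating the analyst's judgment as a callback:

```python
# Minimal sketch of anomaly triage: every new event is either folded
# into the normal baseline or left in an alert state. The event keys
# and the set-based baseline are illustrative assumptions.

def triage(event_key, baseline, is_benign):
    """Return 'normal' or 'alert'; benign novelties join the baseline."""
    if event_key in baseline:
        return "normal"
    if is_benign(event_key):      # analyst call: new app, new hire, new 2FA
        baseline.add(event_key)   # next occurrence is no longer an anomaly
        return "normal"
    return "alert"
```

A real system learns the baseline statistically rather than from an explicit set, but the analyst-facing decision is the same either way.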
But by doing this work, we know that we are not missing that one event that could cost us our jobs or the jobs of everyone in our firm, or that could lead to a public disaster. If MGM had a system like this running when the person who broke into their systems and launched the ransomware attack used an account belonging to a real employee, they could have seen the anomaly of those credentials being used from somewhere outside of MGM. They could have quarantined the account and prevented it from doing anything, and they could have defended themselves against the attack. MGM is definitely running a SIEM, but if it is not a SIEM that can catch anomalies, and is only reacting to rules that were written ahead of time, then it would have missed this attack anyway.

When implementing Gurucul, our first priority is to be prepared to get our logs into the SIEM right away. This is the most important thing until the SIEM is working, because there is nothing to tune until it is actually seeing our log streams, examining the traffic, and doing the analysis. Any preparation and scheduling we can do with Gurucul to successfully get our logs into their system is absolutely essential.

Once we have the logs in the SIEM, it is difficult to say which company will have an easier time tuning their SIEM. It depends on the number of unique events. For example, if our company only does one type of business with a simple set of operations, and has only a few authentication and login points and administrative tasks, tuning the SIEM would be relatively straightforward and could be done in a matter of weeks.
However, if we are running a SIEM in a more complex environment, such as a typical Windows enterprise with on-premises and cloud systems for everything, such as email, file storage, applications, and so on, tuning will take time and will scale with how many people we have to review those systems, analyze the data that comes through, and determine how to handle it in our documentation. Tuning a SIEM is like tuning a guitar or a piano. With a guitar, we have six strings to tighten or loosen to get them to vibrate at the right frequency. With a piano, there are 88 keys with up to three strings for each key. The more strings or keys we have, the longer it takes to tune the instrument.

Things to consider before implementing a SIEM include: Do we have a normal baseline for logins? For example, do our people all show up at 9 AM every day of the workweek, and do they all log on to the domain authentication system and eight applications between 9 AM and 5 PM? How complex is our application structure? The more complex it is, and the more unique things our people do by department and/or user, the less we can predict what their login activity will look like.

The challenge with traditional SIEM is that it relies on parsing rules. This means that we need to be able to anticipate all of the different ways that our users might log in and interact with our applications in order to write effective rules. This is difficult, especially in complex environments with custom applications, and traditional SIEM is not good at detecting anomalies. Anomalies are events that deviate from our normal baseline activity. Traditional SIEM is typically configured to look for specific known threats, so it may miss anomalies that are not on its radar. The benefit of next-gen SIEM is that it can detect anomalies without the need for parsing rules.
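The rule-versus-anomaly distinction above can be made concrete with a toy contrast. Everything here is invented for illustration: a made-up rule table, a made-up event, and a set standing in for a learned baseline.

```python
# Illustrative contrast between the two detection styles.
# Rules, events, and baseline keys are invented for the example.

def rule_based(event, rules):
    """Traditional SIEM: no matching rule means no alert at all."""
    return [name for name, predicate in rules.items() if predicate(event)]

def anomaly_based(event_key, baseline):
    """Next-gen style: anything off the learned baseline surfaces."""
    return event_key not in baseline

rules = {"brute_force": lambda e: e.get("failed_logins", 0) > 5}
novel = {"type": "login", "country": "new", "failed_logins": 1}

# The rule-based pass stays silent on the novel event...
assert rule_based(novel, rules) == []
# ...while the baseline comparison flags it as an anomaly.
assert anomaly_based("login:new-country", {"login:us", "login:uk"})
```

No rule anticipated a single failed login from a new country, so the rule engine says nothing; the baseline check only needs the event to be unfamiliar.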
This means that it can be effective in complex environments with custom applications, and next-gen SIEM can be more effective at detecting novel threats. Novel threats are threats that have not been seen before. Next-gen SIEMs can use machine learning and other techniques to identify anomalies that may be indicative of novel threats. Next-gen SIEM is a better choice for complex environments with custom applications and is also better at detecting anomalies and novel threats.

The good news about having a SaaS provider is that there is not much maintenance required on my side. However, if I send them new log sources, I will trigger some of the work that they have to do. For example, if I give Gurucul all of the applications I want them to monitor 24/7, and six months later I have a new application and we are going to send the logs from that one, they will have to do some maintenance. The logs from applications, unlike systems of the same type, are significantly different from one another. If everything ran on a Windows server, then all of the events from the Windows servers would look the same, because they are Windows servers. However, in something like a cartoon-drawing application, what the user does by calling shapes and colors and drawing, and all of the things we would do to make a cartoon, is going to be significantly different from an accounting application. The log streams from those two applications are not going to look the same at all.

So every time we introduce change, there is maintenance to do on both sides: the log stream from that particular application has to be analyzed to make sure that the language being used is understood, and the mechanism that transports that log from the system we are monitoring into the SIEM service has to be confirmed to work.
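To illustrate why each new application is a maintenance event, here is a sketch of two made-up log formats, one key=value accounting log and one JSON drawing-app log, each needing its own parser before a SIEM could normalize them. Both formats and all field names are invented.

```python
# Two invented log formats showing why every new application source
# needs its own parsing work before the SIEM can understand it.
import json

def parse_accounting(line):
    # e.g. "2024-01-05 14:02:11 user=alice action=post_invoice amount=120.50"
    fields = dict(pair.split("=", 1) for pair in line.split()[2:])
    return {"user": fields["user"], "action": fields["action"]}

def parse_drawing_app(line):
    # e.g. '{"who": "bob", "tool": "ellipse", "color": "#ff0000"}'
    record = json.loads(line)
    return {"user": record["who"], "action": "draw:" + record["tool"]}
```

Only after each source is mapped to a common shape like `{"user", "action"}` can the analytics treat the two streams comparably; that mapping is the per-source maintenance the text describes.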
At the end of the day, the difference between traditional rule-based SIEM and next-gen SIEM is the difference between signature-based antivirus and XDR. Back in the day when we were scanning files for known viruses, we could find some viruses because they were identified by an easy-to-understand hash that was applied to virus files. When the bad guys started adding characters to a known virus file and therefore changing the signature or hash, we could not discover those viruses anymore. This is because the thing we are expecting isn't there anymore. As malware has evolved, we now have polymorphic files. But most of those are simply a way to get our machine in contact with a command and control system that will then...
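The signature-evasion point above is easy to demonstrate: a hash-based signature matches a file byte-for-byte, so appending even a single character produces a different hash and the match fails. A minimal sketch using SHA-256 in place of whatever digest a real scanner uses (the "known bad" sample is a stand-in, not real malware):

```python
# Sketch of hash-based signature matching and why trivial file changes
# defeat it. The sample bytes and signature set are stand-ins.
import hashlib

KNOWN_BAD = {hashlib.sha256(b"EVIL_PAYLOAD").hexdigest()}

def signature_match(file_bytes):
    """Flag a file only if its hash is in the known-bad signature set."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

assert signature_match(b"EVIL_PAYLOAD")             # original sample is caught
assert not signature_match(b"EVIL_PAYLOAD" + b"x")  # one extra byte evades it
```

This is the same blind spot as a rule-only SIEM: the detector only fires on exactly what it was told to expect.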
Gurucul Next Gen SIEM is used for threat detection and response, leveraging machine learning to identify anomalies and breaches. It provides advanced analytics, security event investigation, and compliance management.
Organizations use Gurucul Next Gen SIEM primarily for its robust capabilities in threat detection and response. Its machine learning algorithms effectively identify anomalies and potential breaches, making it a key tool for preventing insider threats. The platform features...