What is our primary use case?
The key challenge we face is visibility: things happen in isolated, pocketed environments where visibility is limited. Silos and isolated networks exist across the environment, and it's difficult to control them completely. Blind spots are our main challenge.
How has it helped my organization?
With this solution, the focus has changed from reactive to more proactive, because all the other SOAR and EDR solutions, firewalls, and IPSs are generally reactive. With those tools, when most things are triggered, it means you are already slightly late. With Vectra, we become more proactive than reactive. More often than not, we pick things up before the actual damage can start. It picks up things that none of our other tools pick up because it's designed to detect things before harm is done, at the initial stages. This is one of the main benefits and the biggest business justification and use case for us.
It reduces the time it takes to respond to attacks because we find out about a threat in the beginning so we can stop it before it can cause harm, rather than reacting when the damage is done and significantly more effort is needed.
And since it is not preventive, it does not trigger any adverse reactions. For example, sometimes we have seen, with certain kinds of malware or ransomware, that they tend to get more aggressive if they realize that something is stopping them, but that doesn't happen with detection tools like Vectra.
For capturing network metadata at scale and enriching it with security information, that's where the second product, Cognito Recall, comes in. It takes enriched network metadata and keeps that information available for you to access, whether it triggers a detection or not. For example, if you want to check who is using SSL version 3, TLS version 1.0, SNMP version 1, SNMP version 2, or clear-text passwords, that metadata is available even though none of it triggers a detection in Cognito Detect. Of course, how long that data is retained depends on how much storage we buy from Vectra. That's a financial constraint, and we have opted for one month. We might look at expanding that further.
That metadata helps in closing vulnerabilities. For instance, if there is a TLS version or an encryption level that we want to deprecate, it is very useful because we can generate reports and know exactly which systems are still using SNMP version 1 or version 2. Recall has more features, and you can create custom detections through it, but we've not gone that far. For us, the most common use case has been identifying protocols and communications that we would like to stop or close. It provides useful data for that.
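To illustrate the kind of lookup this enables, here is a minimal sketch of pulling a "who still uses TLS 1.0" report out of stored network metadata. It is a hypothetical example modeled on a generic Lucene-style search backend; the URL, token, index name, and field names are assumptions for illustration, not Cognito Recall's actual API.

```python
# Hypothetical sketch: querying enriched network metadata for deprecated
# protocol usage, modeled on a generic Lucene/Elasticsearch-style search
# backend. The URL, index, and field names are illustrative only and are
# NOT the actual Cognito Recall API.
import requests
from collections import Counter

RECALL_SEARCH_URL = "https://recall.example.local/api/search"  # hypothetical
API_TOKEN = "REPLACE_ME"                                       # hypothetical

def hosts_using_deprecated_tls(version="TLSv1.0", days=30):
    """Count sessions per source host that still negotiate `version`."""
    query = {
        "index": "metadata_ssl",                 # hypothetical index name
        "query": f'ssl.version:"{version}"',     # Lucene-style query string
        "time_range": f"now-{days}d",
        "fields": ["id.orig_h", "ssl.version"],  # Zeek-style field names
        "size": 10000,
    }
    resp = requests.post(
        RECALL_SEARCH_URL,
        json=query,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    hits = resp.json().get("hits", [])
    return Counter(hit["id.orig_h"] for hit in hits)

if __name__ == "__main__":
    for host, sessions in hosts_using_deprecated_tls().most_common(20):
        print(f"{host}\t{sessions} TLSv1.0 sessions in the last 30 days")
```

The same pattern covers the other cases mentioned above, such as SNMP version 1 and 2 or clear-text credentials, by swapping the index and query string.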
The solution also provides visibility into behaviors across the full lifecycle of an attack, beyond just the internet gateway. It covers the whole MITRE framework and the kill chain: recon, command and control, and so on. It has detections under each of those categories, and it picks them up within the network. In fact, most of the detections are internal. Internet-based detections comprise 25 to 30 percent, and those are based on encrypted traffic. Most of the time, when we validate them, we find the activity is genuine, for example a call from a support vendor where large files need to be uploaded. That gives us an opportunity to validate with the end-user as well: What was happening? What did you transfer?
We used to have SIEM and antivirus solutions and we would get a lot of alerts. Those alerts took a lot of effort to refine, and even then we still needed a lot of effort to analyze the information. Vectra does all of that automatically for us, and what it produces in the end is something that can easily be handled by one person. In fact, it doesn't even require a full-time person.
What is most valuable?
The most useful feature is the anomaly detection because it's not signature-based. It picks up the initial part of any attack, like the recon and those aspects of the kill chain, very well. We've had numerous red team and penetration exercises and, at the initial stage, when the recon is happening and credentials are used and lateral movement is attempted, our existing tools don't pick it up because it has not yet been "transformed" into something malicious. But Vectra, at that stage, picks it up 80 to 90 percent of the time. That has been one of the biggest benefits because it picks up what other things don't see, and it picks them up at the beginning when attackers are trying to do something rather than when the damage is already done.
The ability to roll up numerous alerts to create a single incident or campaign for investigation takes a bit of effort in the beginning, because you'll always have misconfigurations, such as wrong passwords, that can trigger brute-force and SMB-type alerts. You'll also have genuine behaviors in your environment that tend to look suspicious, such as vulnerability assessment and scanning tools; they are not noise, per se. Even when these alerts are non-malicious, they tend to point to things like misconfigurations and security tools. It's been very useful in that sense: once we do the initial triaging, marking this as a security tool or that as a misconfiguration we need to correct, it reduces the noise quite significantly. We don't get more than 10 to 20 events, maximum, generated per day.
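As an illustration of the kind of triage rules we ended up with, the sketch below tags alerts whose source is a known scanner or a known misconfigured account so that only the remainder needs an analyst. The alert structure and the allow-lists are hypothetical; in practice this logic lives in the product's own triage filters rather than in external code.

```python
# Hypothetical sketch of alert triage: suppress detections caused by known
# vulnerability scanners or known misconfigurations (e.g. stale passwords
# triggering brute-force / SMB alerts), and surface only the rest.
from dataclasses import dataclass

@dataclass
class Alert:
    detection_type: str   # e.g. "Brute-Force", "SMB Account Scan"
    src_host: str
    account: str

KNOWN_SCANNERS = {"10.0.5.20", "10.0.5.21"}      # vulnerability-assessment tools
KNOWN_MISCONFIG_ACCOUNTS = {"svc_backup_old"}    # stale service credentials

def triage(alerts):
    """Split alerts into (suppressed, needs_analyst) lists."""
    suppressed, needs_analyst = [], []
    for alert in alerts:
        if alert.src_host in KNOWN_SCANNERS:
            suppressed.append((alert, "known security tool"))
        elif alert.account in KNOWN_MISCONFIG_ACCOUNTS:
            suppressed.append((alert, "known misconfiguration"))
        else:
            needs_analyst.append(alert)
    return suppressed, needs_analyst

# Example: only the last alert would reach an analyst.
alerts = [
    Alert("SMB Account Scan", "10.0.5.20", "scanner"),
    Alert("Brute-Force", "10.1.2.3", "svc_backup_old"),
    Alert("Suspicious Remote Execution", "10.9.8.7", "j.doe"),
]
suppressed, needs_analyst = triage(alerts)
print(len(suppressed), "suppressed,", len(needs_analyst), "for an analyst")
```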
Vectra shows what it does in terms of noise reduction, and we can see that only 1 percent, and sometimes even less than 1 percent, of what it sees actually requires a person to act on it.
It becomes quite easy for a SOC analyst to handle things without being overburdened. And, obviously, it's at the initial stage because it picks things up before the damage happens. It's not the kind of prevention tool that has signatures and that only tells you something bad has already happened. It tells you that something is not right or is suspicious. It says there is a behavior that we have not seen before, and it has always been effective in the red team exercises that we periodically conduct.
Also, we have privileged account management, but we don't have a separate analytics tool for it; Vectra picks that up as well. This is also something that has come up during red team exercises. If an account executes with escalated privileges or runs a service it normally doesn't run, it gets flagged. It tells us about lateral movements and privilege escalations; things that constitute non-standard usage. It's quite effective at catching these. I have yet to see a red team exercise that doesn't generate any alerts in Vectra. We see a jump, and it's very easy to identify the account and the system that is the source.
It also triages threats and correlates them with the compromised host devices, because it maps both ways. It maps the host, the account, and the detection, and vice versa. You can also go to the detection and see how many affected hosts there are. In addition, if there's a particular detection, is there an existing campaign? How many hosts are also doing the same thing? These are the kinds of visibility the tool provides.
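A rough way to picture that two-way mapping: from a host you can list its detections, and from a detection you can list the affected hosts and see whether they roll up into a campaign. The sketch below is a simplified illustration of that data model, not Vectra's internal implementation, and all the names in it are made up.

```python
# Simplified illustration of the host <-> detection <-> campaign mapping.
from collections import defaultdict

detections_by_host = defaultdict(set)   # host -> detection names
hosts_by_detection = defaultdict(set)   # detection name -> hosts
campaign_by_detection = {}              # detection name -> campaign id

def record(host, detection, campaign=None):
    detections_by_host[host].add(detection)
    hosts_by_detection[detection].add(host)
    if campaign:
        campaign_by_detection[detection] = campaign

record("host-a", "Suspicious LDAP Query", campaign="CAMP-7")
record("host-b", "Suspicious LDAP Query", campaign="CAMP-7")
record("host-a", "Hidden HTTPS Tunnel")

# From the host: what is it doing?
print(detections_by_host["host-a"])
# From the detection: which hosts are doing the same thing, and is it part of a campaign?
print(hosts_by_detection["Suspicious LDAP Query"],
      campaign_by_detection.get("Suspicious LDAP Query"))
```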
What needs improvement?
The reporting from Cognito Detect is very limited and doesn't give you too many options. If I want to prepare a customized report on a particular host, even though I see the data, I have to manually prepare the report. The reporting features that are built into the tool are not very helpful. They are very generic and broad. That's one main area that I keep telling Vectra they need to improve.
Also, whenever there's a software upgrade and new detections are introduced and the intelligence improves, there is a short period at the beginning when there's a lot of noise. Suddenly, you will get a burst of alerts because a new type of detection has been introduced and it takes some time to learn. We get worried and always check whether an upgrade has happened, and then we say, "Okay, that must be the reason." I would like to see an improvement so that whenever they do an upgrade, the transition is a bit smoother. It doesn't happen all the time, but sometimes an upgrade triggers noise for a while until it settles down.
For how long have I used the solution?
We've been using Vectra AI for over three years.
What do I think about the stability of the solution?
In the beginning, there is a struggle to fine-tune it because it will generate noise for the reasons I mentioned. But once that learning phase is complete, it's quite reliable. We have been using the hardware for more than three years and there have been no failures or RMAs.
Upgrades happen automatically. We have never gone into the appliance to do an upgrade, even though it's on-prem. It all happens automatically and seamlessly in the background.
Initially, we had some problems with the Recall connection to the cloud, to establish the storage connectivity. But again, these kinds of things are at the beginning. After that, it is quite stable. We've not had any problems.
What do I think about the scalability of the solution?
Scalability for the cloud solution is straightforward. For the on-prem solution, you need to take care of the capacity and the function itself, because the capacity of the same hardware varies, depending on what you use it for. From a capacity point of view, there is some effort required in the design.
Looking to the future, the tool can integrate with more and more solutions beyond its existing intelligence. It's not something we have yet embarked on, but it's an interesting area in which we would like to invest some time.
Visibility into the cloud is more limited, because PaaS and SaaS are always a challenge in terms of cyber security. Although we have already taken the Vectra SaaS offering for O365, they are also coming up with a PaaS visibility tool. It is currently under testing, and we are one of the users chosen to participate in the beta. That's another thing that would add a lot of value in terms of visibility in the future.
Currently, we have about 8,000 users.
How are customer service and support?
We get support directly from the device, or we get a response via email. The response is okay. Because the product is stable, we have not been in a situation where we urgently needed something right away, so we have never tested that kind of fast response. They take some time to respond, but whenever we have requested something, it has not been urgent.
We do get a response and issues always get resolved. We haven't had any lingering issues. They have all been closed.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We did not have any tools in the same league. We had security tools, but not with anomaly detection as part of the feature set.
How was the initial setup?
Cognito Detect is on-prem and Cognito Recall is in the cloud, as is the O365 and Azure AD protection.
The cloud setup is extremely simple. The on-prem setup takes some effort: there is sizing to consider, depending on the model, because throughput varies. Those kinds of on-prem design considerations create a bit of complexity in the beginning, but the cloud is straightforward. All it needs is the requisite access to the tenant; once it has that, it starts its work.
In the beginning, there is some effort in fine-tuning things, but that comes as part of the package with the solution. They have a success manager and tech analyst assigned to support you in the beginning. Once that is done, the product is very stable.
For us, there were an initial four to eight weeks of triaging and clearing the noise, in terms of misconfiguration issues or known security tools. After that time, we started seeing value.
What about the implementation team?
We only used the people from Vectra.
What's my experience with pricing, setup cost, and licensing?
Vectra is a bit on the higher side in terms of price, but they have always been transparent. The reason that they are this good is that they invest, so they need to charge accordingly. They are above average when it comes to price. They're not very economical but it's for a good reason. As long as we get quality, we are okay with paying the extra amount.
Which other solutions did I evaluate?
We did a PoC with Darktrace recently as part of our regular exercise of giving other solutions an opportunity, but the PoC didn't meet our requirements. It didn't detect what Vectra detects in a red team situation.
The deployment time is similar because they all need the same thing. They need the network feed for a copy of the network traffic. The base requirements are the same.
What other advice do I have?
My advice is that you need to size it right and identify what your capacity will be. And you need to place it right, because it's only as helpful as what it can see, so you need an environment that supports that. As part of implementing Vectra, we implemented an effective packet broker solution in our environment. It needs that support system to function properly. It needs copies of your traffic for detection because it doesn't have an agent sitting anywhere. Positioning and packet brokering are critical enablers for this solution.
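On the "size it right" point, a back-of-the-envelope calculation like the one below is roughly what I mean: add up the traffic the packet broker will mirror to each sensor and check it against the sensor's rated throughput. The link names, traffic figures, and rated capacity are illustrative assumptions, not Vectra specifications.

```python
# Back-of-the-envelope sensor sizing: sum the mirrored traffic per capture
# point and compare it against an assumed sensor capacity. All figures are
# illustrative, not vendor specifications.
mirrored_links_gbps = {
    "core-switch-east": 2.5,
    "core-switch-west": 3.0,
    "dmz-aggregation": 1.2,
}
sensor_rated_gbps = 10.0   # assumed rating of the chosen sensor model
headroom = 0.7             # plan for ~70% sustained utilisation

total = sum(mirrored_links_gbps.values())
usable = sensor_rated_gbps * headroom
print(f"Mirrored traffic: {total:.1f} Gbps, usable sensor capacity: {usable:.1f} Gbps")
if total > usable:
    print("Undersized: split capture points across additional sensors.")
else:
    print("Within capacity for this design.")
```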
We have it deployed on-premises. However, we are in the process of acquiring the O365 and Azure AD modules as well. Power Automate and other deeper anomalies are things we have in the cloud in Azure. The new module lets us know if any automation or scripts are running, if there are large, sudden downloads, or if there is access from a country that is different from where the user normally is. But this is a very new tool. We have yet to familiarize ourselves with it and do the fine-tuning. We don't have any automation or similar functions happening on-prem.
In terms of correlating behaviors in the enterprise network and data centers with behaviors in the cloud environment, because we have taken the O365 module, it gives us good correlation between an on-prem user and their behavior in the cloud. We have seen cases where it detects that an account has been disabled on-prem, for example, and tells us that the same account downloaded or uploaded a large amount of data just a few days before that. It does those kinds of correlations.
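That kind of correlation, linking an on-prem event such as an account being disabled to large cloud transfers by the same account in the days before it, can be pictured as a simple time-window join. The sketch below is a hypothetical illustration of that logic with made-up events; it is not how Vectra implements it.

```python
# Hypothetical sketch: correlate an on-prem "account disabled" event with
# large cloud transfers by the same account in the preceding days.
from datetime import datetime, timedelta

onprem_events = [
    {"account": "j.doe", "event": "account_disabled", "time": datetime(2023, 5, 10)},
]
cloud_events = [
    {"account": "j.doe", "event": "large_download", "bytes": 15_000_000_000,
     "time": datetime(2023, 5, 7)},
    {"account": "a.smith", "event": "large_upload", "bytes": 2_000_000_000,
     "time": datetime(2023, 5, 9)},
]

LOOKBACK = timedelta(days=7)

for onprem in onprem_events:
    if onprem["event"] != "account_disabled":
        continue
    related = [
        c for c in cloud_events
        if c["account"] == onprem["account"]
        and onprem["time"] - LOOKBACK <= c["time"] <= onprem["time"]
    ]
    for c in related:
        days_before = (onprem["time"] - c["time"]).days
        print(f'{onprem["account"]}: {c["event"]} of {c["bytes"]:,} bytes '
              f'{days_before} days before the account was disabled')
```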
We have one SOC, but it's based overseas. It's an offsite managed service and it covers the gamut of incident detection and response. It's an always-available service. The SIEM we are using is RSA NetWitness, and the EDR solution we use is McAfee.
Vectra has some automation features, in the sense of taking action through firewalls or other integrations, but that's a journey we have not yet embarked on. As long as we have a continuously available SOC that responds rapidly to the alerts it generates, we are okay. In general, I'm not comfortable with the automation part; accurate detection is more important for me. Prevention makes sense when something is picked up late, as is the case with some of the other solutions I mentioned. But here, when a detection is at the preliminary stage, automated prevention seems a bit too harsh.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.