I don't have deep enough knowledge to comment in detail on what requires improvement in Plixer Scrutinizer. That said, scalability is an area with minor concerns where improvement is needed.
Though Plixer Scrutinizer has network detection and response, that area needs a little more rounding out. Another area for improvement is the lack of a SaaS offering, which some customers were looking for. My company deals with small to medium businesses and the mid-market, and some customers wanted SaaS, which Plixer Scrutinizer doesn't offer. What I'd like to see in the next release is a SaaS offering, because my company also works with educational institutions and smaller businesses that simply don't have the staff to implement this. Either a managed service or a SaaS-based offering that makes things a little easier for those types of customers would be a great addition to Plixer Scrutinizer.
I would recommend supporting more data points. Plixer Scrutinizer cannot handle high traffic volumes. This is a NetFlow analyzer, and a massive volume of information is stored: there are numerous processes running inside a router, so a huge amount of data gets logged in Plixer Scrutinizer. It is my understanding that when the flow rate is too high, the solution cannot handle it, and it is not simply a matter of scaling it up. For example, with Cisco ACI core switches, we found that Plixer Scrutinizer was unable to handle the traffic volume. The same goes for any core switch, or any other switch with a large amount of traffic flowing through it. Regarding data aggregation and storage, while I was not managing the solution myself, this was one piece of feedback specific to it. I can't comment further on the technical side, but from a user standpoint, the team was only keeping the real-time log for one day. For the next three days, it switches to five-minute aggregation; for one week, one-hour aggregation; and for one month, daily aggregation, to save space. It aggregates the data points and removes the individual real-time data points to conserve storage. I would suggest improving data storage, or how the data is archived and stored, so that enterprises have room to keep data for a longer period with less aggregation, for example keeping real-time data for a month rather than just one day.
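The tiered retention scheme the reviewer describes (raw data for a day, then progressively coarser buckets) can be sketched in a few lines. This is an illustrative model of the rollup idea, not Plixer's actual implementation:

```python
from collections import defaultdict

def rollup(samples, bucket_seconds):
    """Aggregate (timestamp, byte_count) samples into coarser time
    buckets by summing, as tiered retention does when it replaces
    raw data points with five-minute, hourly, or daily totals."""
    buckets = defaultdict(int)
    for ts, byte_count in samples:
        # Snap each timestamp down to the start of its bucket.
        buckets[ts - ts % bucket_seconds] += byte_count
    return dict(sorted(buckets.items()))

# Six one-minute samples collapse into two five-minute buckets:
samples = [(0, 10), (60, 20), (120, 30), (300, 5), (360, 5), (420, 5)]
rollup(samples, 300)  # → {0: 60, 300: 15}
```

The trade-off the reviewer objects to is visible here: the totals survive, but the individual per-minute data points are gone once the rollup replaces them.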
When you download Windows 10 and first log in, it says something like, "Welcome. We're setting up a few things, we'll be right with you. We're going to customize some things and get it going for you." Then it just loads to a desktop and nothing else happens. You don't have the applications installed, you don't have any customization; it's just a default setup. That's essentially what we had: a default setup. We were trying to set up some configuration, but it just wasn't working properly, and we couldn't get it set up right. We had multiple meetings. They apparently noted down what I was asking for, but we just went back and forth and couldn't get the thing to work or configure properly. Those discussions were with their sales guy and their sales engineer. They set up a demo for me and were working with me to try to set up some configurations, some customization within it. It wasn't very intuitive. They gave me documentation that wasn't very user-friendly. They just didn't seem to understand what I was trying to do, so we went back and forth, again and again. It was like calling McDonald's and asking for a cheeseburger, and they give you chicken nuggets. I'd say, "This isn't working for me. I want a hamburger. You gave me chicken nuggets." I would ask for one thing and they'd give me something else that didn't make any sense. After multiple meetings, eventually, I was like, "I'm done." Then I started looking at Awake Security and some other MTAs out there.
Network Engineer at a healthcare company with 1,001-5,000 employees
Real User
Jan 12, 2020
I wish the reporting side was easier to work with; it does a decent job, but I wish it was a little more intuitive, or that they offered more reporting examples. Their user videos could also be a little better. They provided me with a couple of training videos, but they were very generic in nature. For example, training videos specific to a Cisco or Palo Alto firewall, showing you specifically within Scrutinizer what you could be looking at, would help. They did provide a basic and an advanced training video. However, even the advanced training video doesn't break things down in detail on the configuration side, and that would be nice.
Network Infrastructure at a tech vendor with 1,001-5,000 employees
Real User
Jan 9, 2020
It would be useful if there was a way to back up the configuration information. E.g., if you wanted to deploy a new instance or do disaster recovery, you could quite easily deploy and restore the config, as opposed to having to restore all the NetFlow data. If there were just a button that said "back up config information," that would be good.
The visual clarity of how it presents data can sometimes be confusing. It takes people a while to get up to speed on reading the graphs, because of how they are displayed and how busy the information is. When you first glance at anything that's displayed, other than the single-line drawings, there is a lot of information shown, and it can be overwhelming if you're not used to it. In a lot of cases, a product like this only gets looked at when there's a report of a problem. It's not an everyday tool, so most people don't get used to it.
Network Manager at an energy/utilities company with 5,001-10,000 employees
Real User
Dec 12, 2019
We have tried to extract a map of data-flow information, but I think we would have to use a JSON query against the API to pull information out of Scrutinizer and correlate it with other third-party tools. We never had the opportunity to do this. It is something that would be nice to do, but it's very labor-intensive. I really would like to exploit the metadata and match it with other applications using the API, but this is not yet available. I'm not sure we'll go that way, because of all the work we would have to do just to extract the metadata from Scrutinizer and then correlate it with the information from the other systems. For that reason, I'm not sure it's going to happen. It would be very interesting, though.

I would also like them to improve the update process. It's complicated now that the product has switched to Linux. Linux does make the server more stable; before, we were running it on Windows, so the fact that they use Linux is very good. However, updates never happen in one day or on our own. Every time, we need to call Plixer to proceed with the update, and they are very efficient at that. Still, if they could make it a bit easier to upgrade, e.g., a click from the web interface to update the system, that would be nice. When updating the Scrutinizer platform while keeping the existing data, it never happens in one day. Every time, we are obliged to install a new server in order to integrate the old data, and every time there is a problem. Most of the time, we were obliged to scrap all the data because we couldn't transfer it to the new server. It would be very good if they could improve this part.

Concerning NetFlow, we have encountered many issues with some routers that don't send proper packets. Every time, we are obliged to log on via SSH and run pcap (pcap is just packet capture). We have to go into Linux to run pcap on the command line, which is not great.
It would be very nice if they integrated the pcap feature into the web interface so captures could be analyzed there. It should be easy: most of the tools we use, and most on the market, provide this feature. It would be great if Plixer integrated pcap functionality through the web interface without our having to go into the Linux system. The security part could also be improved. It would be great if they implemented a better algorithm inside Scrutinizer to detect attacks; the current algorithm for checking whether there has been a DNS attack is very light.
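For the API-based metadata extraction this reviewer wishes for, the client side could be assembled along these lines. The endpoint path and parameter names below are assumptions for illustration only, not the documented Plixer API:

```python
from urllib.parse import urlencode

def build_report_query(host: str, api_key: str,
                       report_type: str = "conversations",
                       start: int = 0, end: int = 0) -> str:
    """Assemble a JSON-report request URL for a Scrutinizer-style API.

    The /api/report path and every parameter name here are
    illustrative assumptions, not the documented Plixer endpoint.
    """
    params = {
        "rpt_type": report_type,  # assumed report-type parameter
        "start": start,           # epoch seconds
        "end": end,
        "authToken": api_key,     # assumed auth mechanism
    }
    return f"https://{host}/api/report?{urlencode(params)}"

# The returned JSON could then be fetched with urllib.request.urlopen()
# and joined against records from other systems by IP and timestamp.
```

Even with a working endpoint, the correlation effort the reviewer describes (matching extracted metadata against other systems' records) would remain the labor-intensive part.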
There is room for improvement around the data they have on their website about solutions. I understand that putting a particular appliance into any given organization brings its own challenges, and Plixer does do a good job of blogging about it, but they should have more templated solutions on their website. Most of the work was going out and identifying how to do RTP performance with a Cisco router, or how to do application response times in an Arista data center deployment; we had to identify the end-vendor configuration that worked with Scrutinizer. They should spend more time documenting solutions and putting together white papers.
I would like to see a better user interface for creating what they call maps. The solution creates a visual map of a particular location and how the network flows. You need to spend time to generate all those maps. If they could figure out a way to reduce the time needed to generate the maps, that would be great.
Business Security Officer at an insurance company with 1,001-5,000 employees
Real User
Nov 13, 2019
Knowing that they're coming out with a new user interface, that is an area where there is room for improvement. There are so many variables. They should limit the variables in the user interface and create some classes, like "simple," "novice," and "expert" to narrow down the variables within it.
They're working on the security areas, so it can provide more insight. What they have is still pretty much IP-concentric. If they were to make it IP and URL, they'd be a little bit ahead on that. I'm not sure exactly where they're at on that topic.
Networks BAU Lead at a consultancy with 51-200 employees
Real User
Nov 7, 2019
One of the areas that needs to be looked at is how the databases are created and managed, because they are collecting a massive amount of data; it's a big-data model. The reporting structure, the front-end GUI, also needs some work, and it takes some getting used to. It works fairly well, but it's a technical tool rather than a user tool: you have to understand the structure of the databases before you can really use it. Work is needed on how the front-end tool accesses the data, and on what decisions it makes in accessing that data to get you the response you need.