Our use case for the solution is monitoring our cloud configurations for security. That use case, by itself, is huge. We use the tool to monitor the security configuration of our AWS and Azure clouds. Security configurations can include storage, networking, and IAM, and the tool also alerts us to any malicious traffic that it detects.
We have about 50 users and most of them use it to review their own resources.
If someone configures a connection to the internet for a certain environment, like Windows RDP, which is not allowed in our environment, we immediately get an alert that says, "Hey, there's been a configuration of Windows Remote Desktop Protocol, and it's connected directly to the internet." Because that violates our policy, and it's also not something we desire, we will immediately reach out to have that connection taken down.
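To give a concrete sense of what a check like that involves, here is a minimal sketch, assuming AWS and boto3. It illustrates the kind of rule in play; it is not Prisma's actual implementation:

```python
# Minimal sketch (not Prisma's implementation): flag AWS security groups
# that expose Windows RDP (TCP 3389) directly to the internet.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
        # Does this rule cover TCP 3389? (Protocol "-1" means all traffic.)
        covers_rdp = rule.get("IpProtocol") in ("tcp", "-1") and (
            from_port is None or from_port <= 3389 <= (to_port or 3389)
        )
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if covers_rdp and open_to_world:
            print(f"ALERT: {sg['GroupId']} exposes RDP directly to the internet")
```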
We're also integrating it into our CI/CD pipeline. There are parts we've integrated already, but we haven't done so completely. For example, we've integrated container scanning into the CI/CD. When they build a container in the pipeline, the scan runs automatically and the results come back to our console, where we monitor them. The beauty of it is that we give our developers access to this information. That way, as they build, they actually get near real-time alerting that says, "This configuration is good. This configuration is bad." We have found that very helpful because it provides instant feedback to the development team. Instead of doing a review later on where they find out, "Oh, this is not good," they already know: "Oh, we should not configure it this way, let's configure it more securely another way." They know because the alerts are in near real-time.
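As an illustration of how a pipeline gate like this can work, the sketch below wraps Prisma's twistcli image scanner in a small Python CI step. The environment variable names, flags, and failure behavior shown here are assumptions for the example, not our exact configuration:

```python
# Illustrative CI step (our pipelines differ): scan a freshly built image
# with Prisma's twistcli and fail the build if the scan violates policy.
# Env var names and flags below are assumptions for this sketch.
import os
import subprocess
import sys

image = sys.argv[1] if len(sys.argv) > 1 else "myapp:latest"

result = subprocess.run([
    "twistcli", "images", "scan",
    "--address", os.environ["PRISMA_CONSOLE_URL"],  # console the results report back to
    "--user", os.environ["PRISMA_USER"],
    "--password", os.environ["PRISMA_PASSWORD"],
    "--details",                                    # per-vulnerability detail in the CI log
    image,
])

# twistcli exits non-zero when the image violates the thresholds configured
# in the console; propagating that exit code fails the build and gives
# developers the near real-time feedback described above.
sys.exit(result.returncode)
```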
That's part of our strategy. We want to bring this information as close to the DevOps team as possible. That's where we feel the greatest benefit can be achieved. The near real-time feedback on what they're doing means they can correct it there, versus several days down the road when they've already forgotten what they did.
And where we have integrated it into our CI/CD pipeline, I am able to view vulnerabilities across our different stages of development.
It has enhanced collaboration between our DevOps and SecOps teams by being very transparent. Whatever we see, we want them to see. That's our strategy. Whatever we in security know, we want them to know, because it's a collaborative effort. We all need each other to get things fixed. If they're configuring something and it comes to us, we want them to see it. And our expectation is that, hopefully, they've fixed it by the time we contact them. Once they have fixed it, the alert goes away. Hopefully, it means that everyone has less to do.
We also use the solution's ability to filter alerts by levels of security. Within our cloud, we have accounts that are managed and certain groups are responsible for them. We're able to direct the alerting and the reporting to the people who are managing those groups or those cloud accounts. The ability to filter alerts by levels of security definitely helps our team to understand which situations are the most critical. They're rated high, medium, and low. Of course, we go after the "highs" and tell them to fix them immediately, or as close to immediately as possible. We send the "mediums" and "lows" to tickets. In some instances, they've already fixed them because they've seen the issue and know we'll be knocking on the door. They realize, "Oh, we need to fix this or else we're going to get a ticket." They want to do it the right way and this gives them the information to enable them to make the proper configuration.
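That triage flow can be sketched roughly as follows; the field names and the account-to-owner mapping are hypothetical, purely to show the high versus medium/low routing:

```python
# Hypothetical triage sketch (field names and owner mapping are invented):
# "high" alerts notify the account owner immediately; the rest open tickets.
from typing import Dict, List

ACCOUNT_OWNERS = {"prod-payments": "team-payments", "dev-sandbox": "team-platform"}

def notify_immediately(owner: str, alert: Dict) -> None:
    print(f"PAGE {owner}: {alert['policy']} ({alert['severity']})")

def open_ticket(owner: str, alert: Dict) -> None:
    print(f"TICKET for {owner}: {alert['policy']} ({alert['severity']})")

def triage(alerts: List[Dict]) -> None:
    for alert in alerts:
        owner = ACCOUNT_OWNERS.get(alert["account"], "security-team")
        if alert["severity"] == "high":
            notify_immediately(owner, alert)  # fix now, or as close to now as possible
        else:
            open_ticket(owner, alert)         # mediums and lows go to the ticket queue

triage([
    {"account": "prod-payments", "severity": "high", "policy": "RDP open to internet"},
    {"account": "dev-sandbox", "severity": "low", "policy": "Storage logging disabled"},
])
```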
Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. When there's an alert for an issue, the event tells you how to fix it. It will say, "Go to this, click on this, do this, do that." It will tell you why you got the alert and how to fix it.
In addition, the solution's ability to show issues as they are discovered during the build phases is really good. We have different environments. Our lower environments, dev, QA, and integration, don't have any data. And then we have the upper environment, which actually has production data. There's a gradual progression as they go from the lower environments and, eventually, hopefully, figure out what to do before going into the upper environment. We see the alerts come in and we see how they're configuring things. It gives us good feedback through the whole life cycle as they're developing a product. We see that in near real-time through the whole development cycle.
I don't know if the solution reduces runtime alerts, but its monitoring helps us to be more aware of vulnerabilities that come in the stack. Attackers may be using new vulnerabilities and Prisma Cloud has increased the visibility of any new runtime alerts.
It does reduce alert investigation times because of the information that the alerts give us. When we get an alert, it will tell us the source, where it comes from. We're able to identify things because it uses a protocol called NetFlow. It tracks the network traffic for us and says, "This alert was generated because these attackers are generating traffic," or "It's coming internally from these devices," and it names them. For example, we run vulnerability scanning weekly in our environment to scan for weaknesses and report on them. At times, a vulnerability scanner may trigger an alert in Prisma. Prisma will say, "Oh yeah, something is scanning your environment." We're able to use this Prisma information to identify the resources that have been scanning our environment. We're able to identify that really quickly as our vulnerability scanner and we're able to dismiss it, based on the information that Prisma provides. Prisma also provides the name or ID of a particular service or user that may have triggered an alert. We are able to reach out to that individual to say, "Hey, is this you?" because of the information provided by Prisma, without having to look into tons of logs to identify who it was.
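That dismissal logic boils down to something like the sketch below; the field names and the scanner subnet are invented for illustration:

```python
# Sketch with invented field names: auto-dismiss network alerts whose source
# is our own weekly vulnerability scanner, using the source IPs that the
# NetFlow data in the alert already names for us.
import ipaddress

KNOWN_SCANNERS = [ipaddress.ip_network("10.20.30.0/28")]  # example scanner pool

def is_internal_scanner(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in KNOWN_SCANNERS)

def handle_network_alert(alert: dict) -> str:
    if is_internal_scanner(alert["source_ip"]):
        return "dismiss"    # just our scanner; no investigation needed
    return "investigate"    # unknown source: reach out to the named user or service

print(handle_network_alert({"source_ip": "10.20.30.5"}))   # dismiss
print(handle_network_alert({"source_ip": "203.0.113.9"}))  # investigate
```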
Because Prisma gives us the information and we don't have to do individual research, it saves us easily one to two hours per day, and probably more.
One of the most valuable features is the monitoring of configurations for our cloud, because cloud configurations can be done in hundreds of ways. We use this tool to ensure that those configurations do not present a security risk by granting overly excessive rights, or by punching a hole into the internet that we're not aware of.
One of the strengths of this tool comes from the fact that we, as a security team, are not configuring everything. We have a decentralized DevOps model, so we depend on individual groups to configure their environments for their development and product needs. That means we're not aware of exactly what they're doing because we're not there all the time. However, we are alerted to things such as if they open up a connection to the internet that's bringing traffic in. We can then ask questions, like, "Why do you need that? Did you secure it properly?" We have found it to be highly beneficial for monitoring those configurations across teams and our DevOps environment.
We're not only using it for configuration monitoring, but also for containers, container security, and serverless functions. Prisma will look to see that a configuration is done in a particular, secure pattern. When it's not done in that particular pattern, it gives us an alert that is either high, medium, or low. Based on those alerts, we then contact the owners of those environments and work with them on remediating the alerts. We also advise them on their weaker-than-desirable configurations and they fix them. We have people who are monitoring this on a regular basis and who reach out to the different DevOps groups.
It scans our containers in real time. Also, as they're built, it's looking into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once it's deployed, it's scanning for vulnerabilities that are in production as the container is running. And we're also moving into serverless, which runs off of code, like Azure Functions and AWS Lambda: short, standalone pieces of code. We're using Prisma for monitoring that too, making sure that the serverless side is also configured correctly and that we don't have commands and functions in there that are overly permissive.
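For the serverless side, the kind of permission check involved might look like the boto3 sketch below, which flags Lambda execution roles whose inline policies allow every action. This is illustrative, not Prisma's code, and it only inspects inline policies:

```python
# Sketch (not Prisma's code): flag AWS Lambda execution roles whose inline
# policies allow "*" actions. Assumes boto3 and read access to IAM/Lambda.
import boto3

lam = boto3.client("lambda")
iam = boto3.client("iam")

for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        role_name = fn["Role"].split("/")[-1]  # role ARN -> role name
        for policy_name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
            doc = iam.get_role_policy(
                RoleName=role_name, PolicyName=policy_name
            )["PolicyDocument"]
            stmts = doc.get("Statement", [])
            stmts = [stmts] if isinstance(stmts, dict) else stmts
            for stmt in stmts:
                actions = stmt.get("Action", [])
                actions = [actions] if isinstance(actions, str) else actions
                if stmt.get("Effect") == "Allow" and "*" in actions:
                    print(f"ALERT: {fn['FunctionName']} role allows all actions")
```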
The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better.
One additional feature I'd like to see is more of a focus on API security. API security is an area that is definitely growing, because almost every web application has tons of APIs connecting to other web applications with tons of APIs. That's a huge area and I'd love to see a little bit more growth in it. For example, when it comes to the monitoring of APIs within the cloud environment: Who has access to the APIs? How old are the API keys? How often are those APIs accessed? That would be good to know because there could be APIs that are never really accessed and maybe we should get rid of them. Also, what roles are attached to those APIs? And where are they connected, to which resources? An audit and inventory of the use of APIs would be helpful.
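As a sketch of the kind of key audit I have in mind, here is what it might look like against AWS IAM with boto3 (the 90-day threshold is just an example):

```python
# Example audit sketch: list IAM access keys older than 90 days, along with
# when each key was last used, to spot stale keys that could be retired.
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age_days = (now - key["CreateDate"]).days
            if age_days <= 90:
                continue
            last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            used = last["AccessKeyLastUsed"].get("LastUsedDate", "never")
            print(
                f"{user['UserName']}: key {key['AccessKeyId']} is "
                f"{age_days} days old, last used {used}"
            )
```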
I've been using Palo Alto Prisma for about a year and a half.
The scalability is "average".
Palo Alto's technical support for this solution is okay.
We did not have a previous solution. It was the same solution, originally called RedLock, which was then purchased by Palo Alto.
The initial setup took a day or two and was fairly straightforward.
As for our implementation strategy, it was to:
- add in the cloud accounts
- set up alerting
- fine-tune the alerts
- create a process to respond to alerts
- edit the policies.
In terms of maintenance, one FTE would be preferable, but we do not have that.
We implemented it ourselves, with support from Prisma.
One thing we're very pleased about is how the licensing model for Prisma is based on work resources. You buy a certain amount of work resources and then, as they enable new capabilities within Prisma, it just takes those work resource units and applies them to new features. This enables us to test and use the new features without having to go back and ask for and procure a whole new product, which could require going through weeks, and maybe months, of a procurement process.
For example, when they brought in containers, we were able to utilize containers because it goes against our current allocation of work units. We were immediately able to do piloting on that. We're very appreciative of that kind of model. Traditionally, other models mean that they come out with a new product and we have to go through procurement and ask, "Can I have this?" You install it, or you put in the key, you activate it, and then you go through a whole process again. But this way, with Prisma, we're able to quickly assess the new capabilities and see if we want to use them or not. For containers, for example, we could just say, "Hey, this is not something we want to spend our work units on." And you just don't add anything to the containers. That's it.
The biggest lesson I have learned while using the solution is that you need to tune it well.
The Prisma tool offers a lot of functionality and a lot of configuration. It's a very powerful tool with a lot of features. For people who want to use this product, I would say it's definitely a good product to use. But please be aware also, that because it's so feature rich, to do it right and to use all the functionality, you need somebody with a dedicated amount of time to manage it. It's not complicated, but it will certainly take time for dedicated resources to fully utilize all that Prisma has to offer. Ideally, you should be prepared to assign someone as an SME to learn it and have that person teach others on the team.
I would rate Prisma Cloud at nine out of 10, compared to what's out there.