What is our primary use case?
Primarily, the intent was to get a better understanding of our cloud security posture. My remit is to understand how well our existing cloud estate marries up to industry benchmarks, such as CIS or NIST, or even AWS's own security controls and benchmarks.
When a stack is provisioned in a cloud environment, whether in AWS, Azure, or Google Cloud, I can get an appreciation of how well the configuration aligns with those standards. And if it's out of alignment, I can effectively task those who are accountable for resources in the cloud to remediate any identified vulnerabilities.
How has it helped my organization?
The solution is really comprehensive. Especially over the past three to four years, I was heavily dependent on AWS-native toolsets and config management. I had to be concerned about whether there were any permissive security groups or scenarios where logging might not have been enabled on S3 buckets, or if we didn't have encryption on EBS volumes. I was quite dependent on some of the native stacks within AWS.
Prisma Cloud not only looks at the workloads of a single cloud service provider, it looks across multiple cloud service providers, outside of the native stack. Although the native tools on offer within AWS and Azure are really good, I don't want to be heavily dependent on them. And with Google Cloud, where there isn't a Security Hub equivalent to give you that visibility, you're quite dependent on tools like Prisma Cloud to provide it. In the past, that used to be Dome9 or Evident.io. Palo Alto acquired Evident.io, and it was rebranded as this cloud posture management solution. It's proven really useful for me.
It integrates capabilities across both cloud security posture management and cloud workload protection. The cloud security posture management is what it was initially intended for: looking at the configuration of cloud service workloads for AWS, Azure, Google, and Alibaba. And you can look at how the configuration of certain workloads aligns with standards such as CIS, NIST, and PCI.
And that brings our DevOps and SecOps teams closer together. The engineering function is accountable for provisioning dedicated accounts for cloud consumers within the organization. There might be an entity within the business that has a specific use case. You then want to ensure that they take accountability for building their services in the cloud, so that it's not just a central function or engineering that is solely responsible. You want something of a handoff so that consumers of cloud within the organization can also have that accountability, so that it's a shared responsibility. Then, if you're in operations, you have visibility into what certain workloads are doing and whether they're matching the standards that have been set by the organization from a risk perspective.
You've also got the software engineering side of the business and they might just be focused on consuming base images. They may be building container environments or even non-container environments or hosting VMs. They also have a level of accountability to ensure that the apps or packages that they build on top of the base image meet a certain level of compliance, depending on what your business risk-appetite is. So it's really useful in that you've got that shared accountability and responsibility. And overall, you can then hand that off to security, vulnerability management, or compliance teams, to have a bird's-eye view of what each of those entities is doing and how well they're marrying up to the expected standards.
Prior to Prisma Cloud, you'd have to have point solutions for container runtime scanning and image scanning. They could be coupled together, but even so, if you were running multiple cloud service providers in parallel, you could never really get the whole picture from a governance perspective. You would struggle to determine, "Okay, how are we doing against the CIS benchmark for Azure, GCP, and AWS, and where are the gaps that we need to address from a governance and compliance perspective so as to reduce our risk and the threat landscape?" Now that you've got Prisma Cloud, you can get that holistic view in a single pane of glass, especially if you're running multiple cloud workloads or a number of cloud workloads with one cloud service provider. It gives you the ability to look at private, public, or hybrid offerings. It saves me having to go to market and run a number of proofs of concept for point solutions. It's an indication of how the market has matured and how Palo Alto, with Prisma Cloud in particular, understands what their consumers and clients want.
It can certainly help reduce alert investigation times, because you've got the detail that comes with the alert to help remediate. The level of detail offered up by Prisma Cloud saves an engineer who might not be that familiar with a specific type of configuration or a specific type of alert from having to delve into runbooks or online resources to learn how to remediate it. Compare that to a SIEM solution, where an event or an alert is triggered, usually based on a log entry, and the engineer then has to start investigating what that alert might mean. With Prisma Cloud and Prisma Cloud Compute, you get that level of detail off the back of every event, which is really useful.
It's hard to quantify how much time it might save, but think about the number of events and what it would be like if that level of detail on how to remediate weren't there each time an event occurred. Suppose you had a threshold or a setting that was quite conservative, based on a particular cloud workload, and that a number of accounts were provisioned throughout the day and, for each of those accounts, there were a number of config settings that weren't in alignment with a given standard. For each of those events, unless there was that level of detail, the engineer would have to look at the cloud service provider's configuration runbooks or their own runbooks to understand, "Okay, how do I change something from this to this? What do I need to do to get this right?" The great thing about Prisma Cloud is that it provides that right out-of-the-box, so you can quickly deduce what needs to be done. For each event, you might be saving five or 10 minutes, because you've got all the information there, served up on a plate.
What is most valuable?
For me, what was valuable from the outset was the fact that, regardless of which cloud service provider you're with, I could segregate visibility of specific accounts to account owners. For example, with AWS, you might have an estate that's solely managed by yourself, or there might be a number of teams within the organization that manage it.
You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums. In addition to that, I can get a snapshot of what I deem to be the priority vulnerabilities, whether it's identity and access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats in your environment, you can get that as a snapshot.
You can also automate how frequently you want reports to be generated. You can then understand whether there has been any improvement or reduction in vulnerabilities over a certain time period.
The solution also enables you to ingest logs to your preferred SIEM provider so that you've got a better understanding of how things stack up with event correlation and SIEM systems.
If you've got an Azure presence, you might be using Office 365, and you might also have a presence in Google Cloud, specifically for data. You might also want to look at scenarios where, if you're using tools and capabilities for DevOps, like Slack, you can plug those into Prisma Cloud as well to understand how well they marry up against vulnerabilities. You can also use it to push vulnerability alerts into Slack as they occur. That way, you're looking at what your third-party SaaS providers are doing in relation to certain benchmarks. That's really useful as well.
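To give a flavor of what that can look like in practice, here's a minimal sketch of forwarding only high- and medium-severity alerts into Slack via an incoming webhook. This is not the product's native Slack integration; the webhook URL and the alert field names below are hypothetical placeholders.

```python
import json
import urllib.request

# Hypothetical incoming-webhook URL provisioned in your Slack workspace.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_alert_to_slack(alert: dict) -> None:
    """Forward a single alert to Slack, keeping only high/medium severities."""
    if alert.get("severity") not in ("high", "medium"):
        return  # align with a risk appetite that deprioritizes low findings

    message = {
        "text": (
            f"[{alert['severity'].upper()}] {alert['policyName']} "
            f"in account {alert['accountId']} ({alert['cloudType']})"
        )
    }
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# Example alert; the field names are illustrative, not the product's schema.
post_alert_to_slack({
    "severity": "high",
    "policyName": "S3 bucket is publicly readable",
    "accountId": "123456789012",
    "cloudType": "aws",
})
```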
In addition, an engineer may provision something like a shared service, a DNS capability, a sandbox environment, or a proof of concept. The ability to filter alerts by severity helps when reporting on the services that have been provisioned. They'll come back as high, medium, or low severity, and I then ensure that we align with our risk appetite and prioritize high and medium vulnerabilities so that they are closed out within a given timeframe.
When it comes to root cause, Prisma Cloud is quite intuitive. If you have an S3 bucket that has been set to public but, realistically, it shouldn't have been, you can look at how to remediate that quite intuitively, based on what the solution offers up as a default setting. It will offer up a way to actually resolve and apply the correct settings, in line with a given standard. There's almost no thinking involved. It's on-point and it's as if it offers up the specific criteria and runbooks to resolve particular vulnerabilities.
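As an illustration of the kind of fix that guidance points you toward, here's a minimal sketch of scripting that remediation yourself with boto3, assuming a hypothetical bucket name. Prisma Cloud's guided remediation describes the equivalent steps; this is just one way to apply them.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket flagged as public

# Enable the bucket-level public access block so ACLs and bucket policies
# can no longer expose the bucket publicly.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Optionally reset the bucket ACL to private as well.
s3.put_bucket_acl(Bucket=bucket, ACL="private")
```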
That assists security, giving them an immediate way to resolve a given conflict or misalignment. The time savings are really incomparable. If you were to identify a vulnerability or a risk, you might have to draw up what the remediation activity should look like. What Prisma Cloud does instead is present you with a report on how to remediate. Alternatively, you can have dynamic events that are generated and pushed to Slack, for example. Those events can then be sent off to a JIRA backlog or the like. The engineers will then look at what that specific event was and what the criteria are, and it will tell them how to remediate it without anyone having to set time aside to explain it. The whole path is really intuitive and almost fully automated, once it's set up.
What needs improvement?
One scenario, in the early days, was trying to get a view on how you could segregate account access for role-based access control. As a DevSecOps squad, you might have had five or six people who had access to the overall solution. If you wanted to hand that off to another team, like a software engineering team, or maybe just another cloud engineering team, there were concerns about sharing the whole dashboard, even if it was just read-only. But over the course of time, they've integrated role-based access control so that users can be limited to viewing only their own accounts and their own workloads, rather than all of the accounts.
Another concern I had was the fact that you couldn't ingest accounts into Prisma Cloud in an automated way. You had to manually integrate or onboard them. They have since rolled out new features and capabilities, over the last 12 months, to cater for that. At an organizational level, you can now plug that straight into Prisma Cloud as and when new accounts are provisioned or created. Then, by default, the AWS account or the Azure account will be included, so you've got visibility straight away.
The lack of those two features was a limitation as to how far I could actually push it out within the organization for it to be consumed. They've addressed those now, which is really useful. I can't think of anything else that's really causing any shortcomings. It's everything and more at the moment.
For how long have I used the solution?
I've been using Prisma Cloud for about 12 months now.
How was the initial setup?
It's pretty straightforward to run an automated setup, if you want to go down that route. The capabilities are there. But in terms of how we approached it, it was plug-and-play with our existing stack. Within AWS, you just have to point Prisma Cloud at your organizational level so that you can inherit all the accounts, and then you have the scanning capability and the enforcement capability, all native within Prisma Cloud. There's nothing that we're doing over and above that, nothing that we would have to automate other than what is provided natively within Prisma Cloud. I'm sure if you wanted to do additional automation, for example if you wanted to customize how it reports into Slack or into Atlassian tools, you could certainly do that, but there's nothing so complex that it requires additional automation over and above what it already provides.
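For anyone who does want to script the onboarding rather than use the console, the general pattern is to authenticate against the Prisma Cloud API and then register the account. This is a rough sketch only; the endpoint paths, payload fields, and role ARN below are assumptions that should be checked against the API documentation for your tenant and release.

```python
import requests

API = "https://api.prismacloud.io"  # your tenant's API base URL may differ

# Exchange an access key/secret for a session token. The /login endpoint and
# payload shape are assumptions; confirm them against your release's API docs.
token = requests.post(
    f"{API}/login",
    json={"username": "<access-key>", "password": "<secret-key>"},
    timeout=30,
).json()["token"]

# Onboard a single AWS account. The endpoint path, field names, and role ARN
# are illustrative placeholders; org-level onboarding uses a different call
# but follows the same pattern.
account = {
    "accountId": "123456789012",
    "name": "sandbox-account",
    "enabled": True,
    "roleArn": "arn:aws:iam::123456789012:role/PrismaCloudReadOnlyRole",
    "groupIds": [],
}
resp = requests.post(
    f"{API}/cloud/aws",
    json=account,
    headers={"x-redlock-auth": token},
    timeout=30,
)
resp.raise_for_status()
```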
What was our ROI?
I haven't gone about calculating what the ROI might be.
But just looking at it from an operational engineering perspective and the benefits that come with it, when it comes to the governance and compliance aspects of running AWS cloud workloads, I now put aside half an hour or an hour on a given day of the week, or on alternate days of the week. I use that time to look at what the cloud security posture is, generate a number of reports, and hand them off to a number of engineering teams, all a lot quicker than I used to be able to do two or three years ago.
In the past, at times I would have had to run Trusted Advisor from AWS, to look at a particular account, or run a number of reports from Trusted Advisor to look at multiple accounts. And with Trusted Advisor, I could never get a collective view on what the overall posture was of workloads within AWS. With Prisma Cloud, I can just select 30 AWS accounts, generate one report, and I've got everything I need to know, out-of-the-box. It gives me all the different services that might be compliant/non-compliant, have passed/failed, and that have high, medium, or low vulnerabilities. It has saved me hours being able to get those snapshots.
I can also step back by putting an automated report in place and receiving it on a weekly basis. I've also got visibility into when new accounts are provisioned, without having to keep tabs on whether somebody has just provisioned a new account or not. The hours that are saved with it are really quite high.
What's my experience with pricing, setup cost, and licensing?
As it stands now, I think things have moved forward somewhat. Prisma and the suite of tools by Palo Alto, along with the fact that they have integrated Prisma Cloud Compute as a one-stop shop, have really got it nailed. They understand that not all clients are running container workloads. They bring point solutions, like what used to be Twistlock, into that whole ecosystem, alongside a cloud security posture management system, and they license it in a way that's favorable for you as a consumer. You can have that presence without being dependent on multiple third parties.
Prisma Cloud was originally intended for cloud security posture management, to determine how the configuration of cloud services aligns with given standards. Through the evolution of the product, they then integrated a capability they call Prisma Cloud Compute, derived from point solutions for container and image scanning. Those capabilities are all on offer within a single pane of glass.
Before Prisma Cloud, you'd have to go to either Twistlock or Aqua Security for container workloads. If you were going open source, obviously that would be free, but you'd still be looking at independent point solutions. And if you were looking at governance and compliance, you'd have to look at the likes of Dome9, Evident.io, and OpenSCAP, in combination with Trusted Advisor. But the fact that you can just lean into Prisma Cloud and have those capabilities readily available, and have an account manager, with pricing based on workloads, makes it a favorable licensing model.
It also makes the whole RFP process a lot more streamlined and simplified. If you've got a purchasing specialist in-house, and then heads-of-functions who might have a vested interest in what the budget allocation is, from either a security perspective or from a DevOps cloud perspective, it's really quite transparent. They work the pricing model in your favor based on how you want to actually integrate with their products. From my exposure so far, they have been really flexible on whatever your current state is, with a view to what the future state might be. There's no hard sell. They "get" the journey that you're on, and they're trying to help you embrace cloud security, governance, and compliance as you go. That works favorably for them as well, because the more clients that they can acquire and onboard, the more they can share the experience, helping both the business and the consumer, overall.
Which other solutions did I evaluate?
Prior to Prisma Cloud, I was looking at Dome9 and Evident.io. Around late 2018 to early 2019, Palo Alto acquired Evident.io and made it part of their Prisma suite of security tools.
At the time, the two that were favorable were Evident.io and Dome9, side-by-side, especially when running multiple AWS accounts in parallel. At the time, it was Dome9 that came out as more cost-effective. But I actually preferred Evident.io. It just happened to be that we were evaluating the Prisma suite and then discovered that Palo Alto had acquired Evident.io. For me that was really useful. As an organization, if we were already exploring the capabilities of Palo Alto and had a commercial presence with them, to then be able to use Prisma Cloud as part of that offering was really good for me as a security specialist in cloud. Prior to that, if as an organization you didn't have a third-party cloud security posture management system for AWS, you were heavily dependent on Trusted Advisor.
What other advice do I have?
My advice is that if you have the opportunity to integrate and utilize Prisma Cloud, you should, because it's almost a given that you can't get another cloud security posture management system like it. There are competitors striving to achieve the same types of things. However, when it comes to the governance element, for a head of architecture, a head of compliance, or even at the CSO level, without that holistic view you are potentially flying blind.
Once you've got a capability running in the cloud, and the associated demand comes through from the business to provision accounts for engineers, technical service owners, or business users, the given is that not every team or every user that wants to consume a cloud workload has the required skill set to do so. There's a certain level of expertise that you need to securely run cloud workloads, just as is needed for running applications or infrastructure on-premises. Unless you have an understanding of what you're opening yourself up to, namely the risk element of running cloud workloads, such as a potential attack or compromise of service, then from an organizational perspective it's only a matter of time before something is leaked or compromised, and that can be quite expensive to manage. There are a lot of unknowns.
Yes, they do give you capabilities, such as Trusted Advisor, or you might have OpenSCAP or you might be using Forseti for Google Cloud, and there are similar capabilities within Azure. However, the cloud service providers aren't native security vendors. Their workloads are built around infrastructure- or platform-as-a-service. What you have to do is look at how you can complement what they do with security solutions that give you not just the north-south view, but the east-west as well. You shouldn't just be dependent on everything out-of-the-box. I get the fact that a lot of organizations want to be cloud-first and utilize native security capabilities, but sometimes those just don't give you enough. Whether you're looking at business-risk or cyber-risk, for me, Prisma Cloud is definitely out there as a specialist capability to help you mitigate the threat landscape in running cloud workloads.
I've certainly gone from a point where I understood what the risk was in not having something like this, and that's when I was heavily dependent on native tools that are offered up with cloud service providers.
The first release that came out didn't include the workload management, because what happened, I believe, was that Palo Alto acquired Twistlock. Twistlock was then "framed" into cloud workload management within Prisma Cloud. What that meant was that you had a capability that looks at your container workloads, and that's called Prisma Cloud Compute, which is all available within a single pane of glass, but as a different set of capabilities. That is really useful, especially when you're running container workloads.
In terms of securing the entire development life cycle, if you integrate it within the Jenkins CI/CD pipeline, you can get the level of assurance needed for your golden or trusted images. And then you can look at how you can enforce certain constraints for images that don't meet the level of compliance required. Then, going from what would be your image repository, when those images are consumed you have the capability to look at what runtime scanning looks like from a container perspective. It's not really on par with, or catering to, what other products offer in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat.
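As a simple example of what enforcing those constraints in a pipeline stage can look like, here's a sketch of a gate that reads a scan report (for instance, JSON output exported by an image scanner) and fails the build if any high-severity findings are present. The report path and field names are assumptions, not a documented schema.

```python
import json
import sys

# Hypothetical path to a scan report produced earlier in the pipeline; the
# field names below are assumptions about that report's shape.
REPORT_PATH = "scan-results.json"
MAX_HIGH_SEVERITY = 0  # risk appetite: fail the build on any high finding

def main() -> int:
    with open(REPORT_PATH) as fh:
        report = json.load(fh)

    vulnerabilities = report.get("vulnerabilities", [])
    high = [v for v in vulnerabilities if v.get("severity") == "high"]

    print(f"{len(vulnerabilities)} findings, {len(high)} high severity")
    if len(high) > MAX_HIGH_SEVERITY:
        for v in high:
            print(f"  {v.get('id', 'unknown')}: {v.get('description', '')}")
        return 1  # non-zero exit fails the pipeline stage
    return 0

if __name__ == "__main__":
    sys.exit(main())
```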
It all depends on the way an organization works, whether it has a distributed or a centralized setup. Is there a central DevOps or engineering function that is the single entity for consuming cloud-based services, or is there a function within the business that has primarily been building capabilities in the cloud for what would otherwise be infrastructure-as-a-service for internal business units? The difficulty there is the handoff. Do you run it as a central function, where the responsibility and accountability sit within the DevOps teams, or is that a function for SecOps to manage and run? The scenario depends on what the skill sets of a given team are and what that team's priorities are.
Let's say you have a security team that knows its area and handles governance, risk, and compliance, but doesn't have an engineering function. The difficulty there is how do you get the capability integrated into CI/CD pipelines if they don't have an engineering capability? You're then heavily relying on your DevOps teams to build out that capability on behalf of security. That would be a scenario for explaining why DevOps starts integrating with what would otherwise be CyberOps, and you get that DevSecOps cycle. They work closer together, to achieve the end result.
But in terms of how seamless those CI/CD touchpoints are, it's a matter of having security experts that understand that CI/CD pipeline and where the handoffs are. The heads of function need to ensure that there's a particular level of responsibility and accountability amongst all those teams that are consuming cloud workloads. It's not just a point solution for engineering, cloud engineering, operations, or security. It's a whole collaboration effort amongst all those functions. And that can prove to be quite tricky. But once you've got a process, and the technology leaders understand what the ask is, I think it can work quite well.
When it comes to reducing runtime alerts, it depends on the sensitivity of the alerting that is applicable to the thresholds that you set. You can set a "learning mode" or "conservative mode," depending on what your risk-appetite is. You might want it to be configured in a way that is really sensitive, so that you're alerted to events and get insights into something that's out of character. But in terms of reducing the numbers of alerts, it all depends on how you configure it, based on the sensitivity that you want those alerts to be reporting on.
I would rate Prisma Cloud at eight out of 10. It's primarily down to the fact that I've got a third-party tool that gives me a holistic view of cloud security posture. At the click of a button I can determine the current status of our threat landscape, in either AWS or Azure, at a config level and at a workload level, especially with regard to Prisma Cloud Compute. It's all available within a single pane of glass. That's effectively what I was after two or three years ago. The fact that it has now come together with a single provider is why I'd rate it an eight.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.