Off the back of Palo Alto's recent marketing video, one of the staff at Check Point put together a point-by-point response to its claims, which can be found here.
Original promo -
https://www.youtube.com/watch?v=tmaM3YHo79U&feature=youtu.be
Making bold claims about inventing technology, vendor-specific capabilities, the size of your coverage, and so forth? I get that everyone has a tagline along the lines of "We are the best there is..." and nobody is aiming for "second best", but when it comes to security the bad guys only need to win once; the good guys have to win 100% of the time.
How is it acceptable for a company to say it is 100% safe when that is a) impossible to promise, and b) demonstrably untrue with even a basic level of research into its equipment? This is shocking behavior. If I sold you a "100% safe" bulletproof vest but when you put it on it had big holes in it, you would want a refund.
I said in a recent article that independent reviews should be the only way forward. I stand by that, but have learned that some, such as Gartner, are less reliable than, say, NSS Labs. We all need to be more vigilant in how we research a product. I talk with so many individuals who have had a rep in from Vendor X and are completely sold on the idea before you have a chance to warn them of the inaccuracies involved.
Is there a better way for us to hold these vendors accountable to their bold claims?
Does it affect your view of either side if one makes bold claims and the other side calls them on it?
I wish it wasn't necessary, but personally I like that a rebuttal has been made.
Some good responses here, and I like that we're agreed that the whole 100% thing is a huge flaw. I am surprised that people think PA is currently ahead of Check Point and Fortinet, possibly even Cisco, in terms of security, as they have generally been on the wrong end of some huge vulnerability announcements and design flaws. Like all technology firms, I have no doubt that PA has plans in place to revolutionize the field in its image, but all vendors do, and the strong support given to PA while things like the above are happening is concerning.
In my opinion, he is the CEO of the company and it's his job to position the marketing in such a manner. However, do take note that the concept presented is acceptable as a "platform" toward a better-secured environment.
As everybody has been saying, I too agree that there is no such thing as a 100% foolproof product out there. Reduced risk and deterrence might be better framings.
Today, organizations tend to listen and jump on the bandwagon based on vendors' views and experience. As usual, the vendor will skew the situation to match its offering, and this is where customers fall into the trap of not knowing what it is that they are actually looking for in the first place; they get confused.
Some key steps a customer needs to take before making any decision:
1. Understand the weaknesses of your own environment
2. Assess the potential risks versus the business revenues
3. Identify root causes of the problem
4. Devise what will be needed to prevent the problem based on your own findings
5. Get help from vendors... their opinions
6. Perform POCs on isolated sections of the organization... and review the results
7. Work the budget to see if this is acceptable for your organization... if not, phase it out by prioritizing the issues
8. Then look for the solution
Technology is plentiful out there, but how you apply the right one, or the nearly right one, must still be a decision of the owners of the network.
Cheers
We can't hold security vendors responsible, since it is up to the company's authorized representatives to exercise due diligence in selecting the appropriate solution and vendor. It is a good idea to have a process such as a Request for Information (RFI), Request for Proposal (RFP), and Proof of Concept (POC). Third-party independent evaluators such as Gartner and NSS Labs are just sources of information that companies can use as a starting point. However, the POC will tell us a lot more, based on actual experience during testing. In addition, nothing is 100% bulletproof, given that APTs keep our security landscape evolving fast. In effect, we need visibility into our network, operating systems, applications, and databases to establish proactive incident management, so that we can respond in a timely manner.
I prefer that companies make claims like this, because then I know not to buy their products! However, when I see a competitor take the time to create a response, I can go either way. I understand the battle for market share is intense and that you are mostly appealing to low-IT-information decision makers, but generally I would rather hear the responder’s claims indirectly, as part of their efforts to advertise their own products.
In the first topic, Moti Sagey compares PA's threat intelligence integration (WildFire) with CP's threat extraction technology: without a recent evaluation of how CP's malware detection performs, it would be hard to conclude which is the best method of malware prevention. Certainly threat intelligence is an exploding market, and PA was one of the first in that market. So, if the assertion is that with CP's malware detection you don't need threat intelligence, perhaps this is a reasonable position to take at the firewall layer. Yet I am not ready to discount the entire threat intelligence market. In a recent evaluation of malware detection engines where I work, Cylance was the clear leader in malware prevention. We did not test firewalls for this functionality, and certainly PA did not address the zero-day threats we were hunting. But nor did we expect them to; we required endpoint protection for our devices above and beyond perimeter defenses. They are mobile. This is endpoint functionality.
In the second topic, Moti compares application protocol detection and prevention. It's a quick comparison of numbers showing Checkpoint to be the leader in application-level control, with that vendor addressing more applications, and highlighting some specific areas. While this may be a good metric, it may also be a lesson in what not to pitch: a vendor having more fine-grained controls in application areas, releasing six controls for one specific application where other vendors release just a single control, while not protecting any better against malicious apps than anyone else. There are two points to make here. First, without a real comparison of what constitutes a threat and how it is prevented, this comparison is probably not valid; numbers without a deep review of what they mean are basically propaganda, and this section is almost a red flag to question rather than accept. Second, if the vendor has that many dials and knobs, it may require a subject matter expert to tune it, which companies cannot afford today. Never just state "more is better", not today.
On the third matter, who invented the stateful firewall: here Moti compares the claims on a corporate patent to who actually came up with the idea. Anyone who has written a patent before probably knows that you don't always get your name on what you invented; the company that owns your work does, and it assigns the ownership of that patent. I don't know who invented the stateful firewall, and I doubt it was one person. After all, it is the logical progression of what you do with a firewall once you evolve beyond a proxy-based firewall. However, this issue is pointless to mention. It would be nice to know what happened in the development skunkworks back then, but it is irrelevant to firewall effectiveness.
On the next matter, the assertion that PA represents itself as unique in market functionality: PA is relatively unique in its next-generation firewall approach and was certainly the first firewall vendor to logically combine the technologies acting on the periphery (IPS, FW, web inspection, threat intelligence, DLP, etc.) into one box. That is a huge bonus, and for a long time it was indeed unique. It is argumentative to take the marketing copy for that position, combine it with the description of functionality in a specific area later in the presentation, and conclude that PA is claiming to be something it is not. Similarly, it is argumentative to argue over which features should be offered on different product line models. However, on the final point: representing Traps, PA's purchase of the Cyvera endpoint protection product, as 100% effective in endpoint protection is indeed a poor representation. It does not strongly qualify. Is this firewall prevention of exploits? No, but the idea of that functionality in your NGFW is still a good one. To make a good point here, Checkpoint needed to be compared to PA, and that was not done.
On the NSS Labs comparison: anyone who has researched the NSS Labs benchmarks has an idea of the areas this touches on. It focuses well on specific firewall capabilities across many areas, including functionality that is boutique in firewall jargon. Applying the research in that report to a company's specific firewall requirements is a research paper in itself. Some of the points made here could be very useful, specifically the man-hour support requirements for different products. If anything, this is the area that should have been deconstructed for valid input.
My response to this question is: marketing is just that, positioning your product to highlight the strengths of its features. Comparative marketing is useful also, but only as an introduction as to why a product should be considered. After some time with the Checkpoint folks, it would be interesting to test drive their product, and some of the innovations that Checkpoint may have made.
…but not based on this set of videos. PA leapfrogged Checkpoint 5 or so years ago. If, in that time, Checkpoint has subsequently managed to leapfrog PA, it would be great to hear about how.
There is no solution available in this market that can prevent zero-day or advanced threats outright.
The only real option is to have visibility into the movement of these kinds of threats: continuous protection, which you get with a retrospective security feature.
Cisco Advanced Malware Protection (AMP) provides retrospective security.
Security professionals often lack visibility into the scope of advanced malware in their network, struggle to contain and remediate it after an outbreak, and cannot address fundamental questions, including:
● What was the method and point of entry?
● What systems were affected?
● What did the threat do?
● Can we stop the threat and eliminate the root cause?
● How do we recover from the attack?
● How do we prevent it from happening again?
Cisco AMP for Networks and AMP for Endpoints addresses all of these otherwise unanswered questions.
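To make the "retrospective security" idea above concrete, here is a minimal sketch of the general concept, not Cisco AMP's actual implementation: keep a record of which hosts saw which file hashes, so that when threat intelligence later flags a hash as malicious, you can retroactively answer "what systems were affected?" for sightings that occurred before the verdict changed. The `RetrospectiveTracker` class and its method names are hypothetical, invented for illustration.

```python
# Hypothetical sketch of retrospective security: log every file sighting,
# then look back through the log once a file's disposition turns malicious.
# This is NOT Cisco AMP's API; names and design are assumptions for illustration.
from collections import defaultdict


class RetrospectiveTracker:
    def __init__(self):
        # file hash -> list of (host, timestamp) sightings, in arrival order
        self.sightings = defaultdict(list)

    def record(self, file_hash, host, timestamp):
        """Log a file seen crossing the network, regardless of its current verdict."""
        self.sightings[file_hash].append((host, timestamp))

    def reclassify(self, file_hash):
        """Called when intel later marks the hash malicious: return its full
        trajectory so responders can scope entry point, spread, and remediation."""
        return list(self.sightings.get(file_hash, []))


tracker = RetrospectiveTracker()
tracker.record("abc123", "host-a", "2016-01-10T09:00")
tracker.record("abc123", "host-b", "2016-01-11T14:30")
# Days later the hash is judged malicious; the earlier sightings still answer
# "what was the point of entry?" and "what systems were affected?"
print(tracker.reclassify("abc123"))
```

The design choice that matters is recording sightings unconditionally at observation time; a purely point-in-time scanner discards exactly the history that incident responders need once a verdict changes.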
No firm can claim or guarantee that it provides 100% protection. A company should look at Gartner and NSS Labs and then, if possible, bring in both products to do some comparisons of features and functionality. With respect to scalability, you'll have to discern whether the product scales for your environment and anticipated growth.
There is no such thing as 100% security. Every cybersecurity vendor states clearly that it is a leader in this field, but the question from the customer's point of view is: how can I trust your claims?
I do like to review the Magic Quadrant report, but do I fully trust it? Of course not, because the report lacks much of the detailed information about the research process and analysis behind it, which is not the case with NSS Labs and ICSA Labs.
John Maddison, Sr. Vice President, Products and Solutions at Fortinet stated "validation from organizations like NSS Labs plays a critical role to help cut through the noise customers face today. Third-party testing holds vendors to the product specifications and their performance claims so customers can make truly informed decisions instead of discovering real-world performance after they deploy a solution in their network."
We need to educate IT people in all businesses about how important it is to review NSS Labs or any other third-party validation when choosing among cybersecurity vendors. NSS Labs tests the top cybersecurity vendors with real-world scenarios, measuring security effectiveness against hundreds of attacks daily, as well as network performance.
- Palo Alto fixes issues identified by NSS Labs (www.channelweb.co.uk)
- Independent lab tests find firewalls fall down on the job (www.csoonline.com)
- Lesson from SecurID breach: Don't trust your security vendor (www.networkworld.com)
You ask “how is it acceptable to say we are 100% safe?”. The answer is that it’s not! As you say, NO vendor can claim total safety from threats, breaches, or any other security-related metric.
I work for CA Technologies, a leading identity management vendor. We are scrupulous in our claims, to make sure that we don't commit to the impossible. More specifically, we always state that our products "HELP improve security", a far cry from promising complete safety. The vast majority of our customers have found that the products SIGNIFICANTLY help improve security, but we can't, and don't, claim a particular level of security that will result from their use.
For those capabilities that are somewhat measurable, we rely on external validation to substantiate our claims. For example, all vendors promise scalability, but we have had our identity management product validated (by an external analysis firm) to support 100 million users.
I agree with the premise that a vendor’s claims should be analyzed critically in order to gauge their credibility. Red flags such as “100%” should be viewed skeptically, no matter what they are promising 100% of. And, if it’s 100% security, discerning buyers should beat a hasty retreat and search out vendors who have a more realistic view of the capabilities of their product.