What is our primary use case?
Our use case for Nexus is to monitor all of our dependencies; the main thing we're using it for is tracking the vulnerabilities listed against them.
How has it helped my organization?
It alerts us to new vulnerabilities before our clients notice them, so we have time to review them, audit them, and determine how we need to proceed with resolving the issues before we get any client communication.
Before we had this in place, we had a much more reactive approach to CVE listings. Since integrating it, and as we've refined our process over the past eight months to a year, we have moved to a proactive approach that lets us audit and decide on mitigation before any client submissions come in.
In addition, it has brought open-source intelligence and policy enforcement across our software development lifecycle, giving us more controls at each stage. When we bring in a dependency, we're able to see what it introduces, from a security and licensing perspective, before we publish a release to the public. Within the build stage, if we pull in a new dependency, Nexus very quickly tells us whether it has issues, and we catch it there. It scans during builds; it checks our staging environment where we do our regression testing; and it monitors our released branches and lets us know if issues are found in our releases. It really does hit every stage of that lifecycle.
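To make the build-stage hook concrete, here is a minimal sketch of what the Jenkins side of that kind of integration can look like, assuming the Nexus Platform plugin for Jenkins and its nexusPolicyEvaluation pipeline step; the application ID, stage name, and scan pattern below are placeholders rather than our actual configuration:

```groovy
// Jenkinsfile fragment: evaluate newly built artifacts against IQ Server policy
// at the 'build' stage, so a risky dependency is flagged as soon as it is pulled in.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // produce the artifacts to be scanned
            }
        }
        stage('Nexus IQ Evaluation') {
            steps {
                nexusPolicyEvaluation(
                    iqApplication: 'our-product',                     // placeholder application ID in IQ Server
                    iqStage: 'build',                                 // lifecycle stage to evaluate against
                    iqScanPatterns: [[scanPattern: 'target/*.jar']]   // placeholder scan target
                )
            }
        }
    }
}
```

The same step can be pointed at the later stages (for example 'stage-release' and 'release') from the corresponding pipelines, which is roughly how the staging and released-branch monitoring described above maps onto the lifecycle stages in IQ Server.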
What is most valuable?
I like the JIRA integration, as well as the email notifications. They allow me to see things more in real time without having to monitor the application directly. As new items come in, it generates a JIRA task and sends me an email, so I know to go in and have a look at what has been flagged.
The policy engine is really cool. It allows you to set different types of policy violations, on things such as the age of the component and its quality: Is it something that's being maintained? Those are all really great for getting ahead of problems before they arise. You might otherwise end up with a library that's end-of-life and is not going to get any more fixes. This can really help you get ahead of things before you end up in a situation where you're refactoring code to remove a library. The policy engine absolutely provides the flexibility we need. For the most part we are rolling with the default policy; we've added to it and adjusted it a little, but out-of-the-box the default policy is pretty good.
The data quality is good. The vulnerabilities are very detailed and include links to review the actual postings from the reporters. There have been relatively few that I would consider false positives, which is cool. I haven't played with the licensing aspect that much, so I don't have any comment on the licensing data. One of the cool things about the data available within the application is that you can choose your vulnerable library, pull up the component information, and see which versions of that library are available that don't have any listed vulnerabilities. I've found myself using that a lot this week as we prepare for a new library-upgrade push.
The data quality definitely helps us to solve problems faster. I can pull up a library and see, "Okay, these versions are non-vulnerable," and raise my upgrade task. The most valuable part of the data quality is that it really helps me fit this into our risk management or our vulnerability management policy. It helps me determine:
- Are we affected by this and how bad is it?
- How quickly do we need to fix this? Or are we not affected?
- Is there any way to leverage it?
Using that data to perform targeted, manual testing, in order to verify that something isn't a direct issue and can therefore be designated for upgrade in the next release, means that we don't have to do any interim releases.
As for automating open-source governance and minimizing risk, it does so in the sense of auditing vulnerabilities, thus far. It's still something of a reactive approach within the tool itself, but it comes in early enough in the lifecycle that it does deliver on both.
What needs improvement?
The biggest thing I have run into, although there are ways around it, is easily accessing the auditing data from a third-party tool, so that it can all be pulled into one place in a cohesive manner and reported on. We've had a bit of a challenge with that. There are a number of options in the tool that would help, but we just haven't explored them yet.
For how long have I used the solution?
We're going on our second year using the solution.
What do I think about the stability of the solution?
I've never had any stability issues with the application. I haven't performed any of the upgrades, but we've never had any downtime and we've never had any issues with notifications or an inability to access the information we need.
How are customer service and technical support?
The technical support is fantastic. I reached out with a suspected false negative and had a response within hours. By the next day they had determined that, yes, it was a false negative, and that same day the notification came in that the issue had been resolved. So within less than 48 hours of reporting a false negative, I had a full turnaround and the result returned in the tool.
Which solution did I use previously and why did I switch?
Before IQ Server, we used an open-source solution called OWASP Dependency-Check. We wanted something a little more plug-and-play, something a little more intuitive to configure and automate.
How was the initial setup?
For the initial deployment, it was in place within a couple of days of starting the trial.
We did have an implementation strategy sketched out, in terms of requirements for success during the PoC. The requirements were that it would integrate easily into our pipeline, so that it was very automated and hands-off. Part of the implementation strategy was that we expected to use Jenkins, which is our main build-management tool.
In terms of integrating the solution into developer tooling like IDEs, Git repos, and so on, I wasn't really part of the team that was doing the integration into the pipeline, but I did work with them. We didn't have any problems integrating it. From what I did see, it looks like a very simple integration, just adding it straight into Jenkins. It integrated quite quickly into the environment.
At this point we haven't configured it to do any build-blocking. But that's something we'll be reviewing, now that we have a good process in place.
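For what it's worth, my understanding is that turning on build-blocking later should not require reworking the pipeline itself: the same evaluation step is used, and the build fails once the matching policy's action for that stage is set to "Fail" in IQ Server. A hedged sketch, again with placeholder names:

```groovy
// Hypothetical gating setup: the Jenkins step stays the same; enforcement comes from
// setting the relevant policy's action to "Fail" for this stage in IQ Server.
stage('Release Gate') {
    steps {
        nexusPolicyEvaluation(
            iqApplication: 'our-product',     // placeholder application ID
            iqStage: 'release',               // evaluate against release-stage policy actions
            failBuildOnNetworkError: true     // also fail if IQ Server cannot be reached
        )
    }
}
```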
What was our ROI?
We have absolutely seen ROI with Sonatype. The more proactive approach is definitely a return on investment. It significantly lowers the turnaround for responding to incoming issues. It has also empowered our support staff to pass along audit results without having to loop in the security team directly. There is much lower overhead involved when doing it that way.
Also, being able to improve how we handle vulnerability management, by getting detailed information from the scan results or the listings and auditing and testing them thoroughly, really helps with development resources in our case. We do not have to cram in a bunch of upgrades just for the sake of upgrading if we're constrained elsewhere. It really helps prioritize dev resources.
I don't know if it has directly saved time in releasing secure apps to market. It has definitely made everything more efficient, but unless things are critical and can definitely be leveraged, we don't necessarily delay a release.
The upgrade process definitely has a quicker turnaround because the tool allows us to target versions that are not vulnerable. But it is hard to quantify whether, in the grand scheme of things, our developers are more productive as developers.
Which other solutions did I evaluate?
We looked at things like Black Duck, White Source, and White Hat.
The biggest factor, and the reason we went with Nexus, was that it returned more results and far fewer false positives than the other tools.
What other advice do I have?
Take some time to configure your notifications and your JIRA integration properly, along with the policy tweaks. As you integrate and first deploy the tool, don't block any builds until you have started to catch up on any issues that may already be there. Really spend some time on that policy review and make sure it encompasses and aligns appropriately with your vulnerability management policy.
It is incorporated in all of our software branches, and we keep our most recent end-of-life branch active in it just to monitor for critical issues, so we can notify the community to upgrade. We may also add our new mobile application to it.
Nexus Lifecycle is definitely a nine out of 10. I would say 10 if it were a little easier to get the audit information out. Again, there are ways around that, so I am not taking off much for it. It's a solid nine. The results are amazing, the quality of the data coming back is great, and the audit interface is easy to use.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.