We performed a comparison between Dynatrace and SolarWinds NPM based on real PeerSpot user reviews.
Find out what your peers are saying about Datadog, Dynatrace, New Relic and others in Application Performance Monitoring (APM) and Observability.

"For me, the service workflow and the dashboards are the most valuable features, simply because I know what's going on in my infrastructure within five minutes, versus two hours."
"The PurePath feature enables you to see the path from click to database query."
"For me, the most valuable feature of this solution is that deep dive that we get out of the AppMon product with the PurePath technology, and the way that the PurePath stack works."
"The NeoLoad plugin is awesome, and it gets results from load tests correlated with test scenarios."
"The way it shows a problem on the dashboard is pretty good."
"The geographical view provides a nice visualization of performance on a map."
"We use it, in many instances, to find the root cause in production."
"We use it to monitor over 1,000 servers in AWS."
"The most valuable feature is the way it monitors the environment, and how user-friendly the console is for the end-user. The interface is also very easy and it captures all the information very well."
"The most valuable features are the alerting and usage-tracking notifications on disk space capacity, network, and processor utilization."
"SolarWinds NPM is a scalable solution since it can handle a huge number of users."
"Being able to easily and quickly obtain disk space statistics from servers and determine how much was free or used on various volumes."
"It gives us a map of the network setup and one console to see the entire network."
"The benefit of this solution is the reporting. We're able to report on and see our network in a graphical form. We are able to detect when a device is added to a network."
"We don't have any issues with the stability of SolarWinds NPM."
"The most valuable features of the solution are its graphical interface and reports."
"The one area that we get value out of now, where we would love to see additional features, is the Session Replay. The ability to see how one individual uses a particular feature is great. But what we'd really like to be able to see is how a large group of people uses a particular feature. I believe Dynatrace has some things on its roadmap to add to Session Replay that would allow us those kinds of insights as well."
"We ran into a problem where the Dynatrace JavaScript agent is returning errors, and it's very apparent that there's a problem. However, the customer support will ask us for seemingly unnecessary details instead of looking at our dashboard through their account to see what the problem is. They ask us for a lot of details not really related to solving the problem. As a result, we still have a few issues that were never resolved. They're not major issues, but they're kind of frustrating."
"When you're making that transition from AppMon, which is very dashboard-oriented, over to Dynatrace, which has no dashboards, there needs to be something in between so that the business buys in a little bit. I would transition my dashboards over so that we don't have to recreate them, because recreating them is very difficult in Dynatrace. It's really hard to say, "Oh, the dashboards that you had on the team that you were using, you're not going to get over here," or, "You have to re-create them all over again." People are going to ask questions about cost and who is going to do that."
"I do know that, for the size of our organization (we're talking thousands of agents and hundreds of applications), it gets to the point where the servers that house Dynatrace are, in some cases, just too big for one machine, since an entire application ecosystem has to funnel into a single system."
"One of the new features is "impacted users." I would like to see a rate of impacted users. For example, how long has the problem been going on: 100 users in five minutes. Does that mean that in 3 hours if we don't get this solved, we're impacting x number of people? Understanding the rate at which the problem is impacting people would be a cool feature."
"Initially, it was difficult to figure out how to use the solution and where to navigate."
"The heavy client is not really user-friendly and the concepts (while powerful) are unintuitive."
"The dashboards are too clumsy; it would be better to keep less on them and make the sections easier to find."
"We'd like to see a bit more automation in the future."
"One of the challenges with SolarWinds is that, in order to pull the data, we end up with a lot of false positives."
"The price of the solution can be improved."
"Being able to detect devices that are trying to connect wirelessly would make using this solution much easier."
"It has covered everything, so no improvement is required at their end. The only thing is the price."
"My team has had a lot of issues with support."
"They need to work on their automation, particularly automatic device discovery; they are not very good at discovering devices automatically."
"The dashboards for this solution could be improved. We would like to divide the dashboards to give a clear view to our management team to show what we have and what deficiencies exist in our network."
Dynatrace is ranked 2nd in Application Performance Monitoring (APM) and Observability with 341 reviews while SolarWinds NPM is ranked 4th in Network Monitoring Software with 147 reviews. Dynatrace is rated 8.8, while SolarWinds NPM is rated 8.2. The top reviewer of Dynatrace writes "AI identifies all the components of a response-time issue or failure, hugely benefiting our triage efforts". On the other hand, the top reviewer of SolarWinds NPM writes "High-level, comprehensive, and proactive monitoring in a user-friendly interface". Dynatrace is most compared with Datadog, New Relic, AppDynamics, Splunk Enterprise Security and Azure Monitor, whereas SolarWinds NPM is most compared with Zabbix, PRTG Network Monitor, ManageEngine OpManager, ThousandEyes and Entuity.
We monitor all Application Performance Monitoring (APM) and Observability reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.
Hi @Michael Bruen ,
The data below has probably already been published on PeerSpot and can help you identify the differences.
I hope this helps. Please share your views.
https://www.peerspot.com/produ...