What is our primary use case?
Currently, the primary use case for AppMon is what I would call our flagship applications across the bank. We have had it for about three years. Adoption happened over time, one application at a time, then more and more. All of our major flagship applications now have AppMon dashboards. We do have some SaaS, but that is because those applications are cloud-based solutions. The use case, frankly, is about improving monitoring and system uptime and preventing incidents. If you have the thresholds set correctly, along with alerting and everything else, your operational teams can find problems before the field even notices.
How has it helped my organization?
We have substantially lowered incidents in our organization. It is hard to measure exactly, as a percentage, how much they have been lowered. As a general statement, though, there is no question that the number of incidents and the duration of incidents have dropped. Even if we do get blindsided by some infrastructure failure, our ability to pinpoint the problem through things like PurePath has dramatically reduced incident duration. Whether you want to argue about AppMon, SaaS, or cloud from a business value point of view, that is tangible even for our non-technical people at the bank. They get this.
What is most valuable?
- The threshold alerting is what makes the difference.
- PurePath, for deep-dive analysis of problems. That is a massive benefit.
The dashboard is eye candy, because it is just a screen. It looks nice, but the thresholding and alerting are what make it meaningful, because we are a 24/7 operation. As you can imagine, at 2:00 AM you cannot necessarily afford to have a bunch of people staring at glass. We have to have the mechanism of alerts, which is tied into our other systems, like xMatters. That is how it works for us.
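To make that mechanism concrete, here is a minimal sketch of the general threshold-to-alert pattern: a measured value is checked against warning and critical thresholds, and a breach is pushed to an on-call notification system. This is illustrative only; the webhook URL, payload shape, and function names are my own assumptions, not the actual Dynatrace AppMon or xMatters APIs.

```python
# Minimal, illustrative threshold-to-alert sketch. Nothing here is the real
# Dynatrace AppMon or xMatters API; the endpoint and payload are assumptions.
import json
import urllib.request
from typing import Optional

XMATTERS_WEBHOOK = "https://example.xmatters.com/hooks/alerts"  # hypothetical endpoint


def check_threshold(value: float, warn: float, critical: float) -> Optional[str]:
    """Return a severity if the value breaches a threshold, otherwise None."""
    if value >= critical:
        return "CRITICAL"
    if value >= warn:
        return "WARNING"
    return None


def send_alert(metric: str, value: float, severity: str) -> None:
    """Post a minimal alert payload so the on-call system can page someone."""
    payload = {"metric": metric, "value": value, "severity": severity}
    req = urllib.request.Request(
        XMATTERS_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # auth and retries omitted for brevity


# Example: a 4.2 s response time against 2 s warning / 4 s critical thresholds.
severity = check_threshold(4.2, warn=2.0, critical=4.0)
if severity:
    send_alert("checkout.response_time_s", 4.2, severity)
```

The point of the sketch is that the alert, not the dashboard, is what wakes someone up; the dashboard is only useful when a person is already looking at it.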
What needs improvement?
I do not know everything that is in the hopper, so what I am about to say could already be in it. I am learning more about the so-called 7.1, be it SaaS or AppMon. Setting up the thresholds and alerting is complicated, because you have to understand how they map to use cases. In other words, from a business perspective, you want to say, "I want this to alert under these conditions." However, you have to translate that into all the various settings in Dynatrace. It would be easier if Dynatrace just had a button that said, "I want this alerting use case," and when I pushed it, it set the 17 values behind the scenes. That would probably be a more user-friendly way. It would not require the user to understand what a threshold is, or even what the different threshold intervals are. It would just be a black box: "I want this experience," and it figures out what to set. A rough sketch of that idea follows.
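As a sketch of what that button might do, consider a named preset that expands a business-level use case into the many underlying settings. Every key and value here is an invented example, not a real Dynatrace configuration parameter.

```python
# Illustrative sketch of the "one button" idea: a named alerting use case
# that expands into many low-level settings. All keys and values are invented
# examples, not real Dynatrace configuration parameters.
from typing import Any, Dict

ALERTING_PRESETS: Dict[str, Dict[str, Any]] = {
    "slow-checkout-experience": {
        "metric": "response_time_ms",
        "warning_threshold": 2000,
        "critical_threshold": 4000,
        "evaluation_window_s": 300,
        "min_samples": 20,
        "dealerting_window_s": 600,
        # ...plus the dozen other knobs a preset could hide from the user
    },
}


def apply_alerting_use_case(name: str) -> Dict[str, Any]:
    """Expand a business-level use case name into the underlying settings."""
    try:
        return ALERTING_PRESETS[name]
    except KeyError:
        raise ValueError(f"Unknown alerting use case: {name!r}")


# The user picks an experience; the tool fills in every low-level value.
settings = apply_alerting_use_case("slow-checkout-experience")
print(settings["critical_threshold"])  # -> 4000
```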
For how long have I used the solution?
Three to five years.
What do I think about the stability of the solution?
The product is evolving, maybe too fast; maybe the evolution is a little fragmented. That is somewhat expected, because the company, in my impression, is spending a lot more money. It is a function of the fact that they are growing as a company and revenue is growing, so there is probably a lot more emphasis on R&D and different product development. My expectation is that over time it will become a more unified, stable product. However, from the product itself, we have generally not had issues; we have not needed something that monitors the monitor. We have not really had to worry about it.
What do I think about the scalability of the solution?
I have not been aware of any limitations. I know that the older version of AppMon, the so-called classic version, had some limits on the number of agents per server. However, those limits never caused a challenge for our particular topology.
How are customer service and technical support?
When our adoption was in its infancy, we had Dynatrace guardians onsite, local to our city, if you will. They were effectively our support because they were physically there. As our own staff became enabled and basically knew the product, we frankly did not need support, unless it was a product defect or something like that, in which case we had a team within our company that was the interface to Dynatrace.
Which solution did I use previously and why did I switch?
We have used siloed monitoring tools in the past. A lot of those products have obviously been around for years, even before Dynatrace. Typically, they are technology- or topology-specific: for a given operating system or environment, there is a product of choice, and a different operating system or environment means a different product. You also end up with a whole lot of tool sets depending on the use case; for a synthetic use case, something like Foglight, for example. You just wind up with too many tools. Even this morning, Dynatrace's CTO talked about this very problem. I guess Dynatrace is trying to solve it with a one-shoe-fits-all approach. As an organization, who would not want a single supported product that can go across all the topologies, cloud, and everything else?
There was a product we used before Dynatrace. We are a big mainframe shop, so it was a mainframe product, really built for the IBM mainframe. Because we were heavily in mainframe, and this is going back a few years now, that was the product of choice. Then, with web-based solutions, all these applications, the cloud, and everything else coming, we needed something else. I do not quite know how it happened exactly, but somebody talked to somebody who talked to some Dynatrace person. I remember attending the very first meeting where a Dynatrace person came to our site; they asked me to attend because I am a big stakeholder, and I guess it just went from there.
At decision time, there was senior executive emphasis that there simply appeared to be too many incidents, and we are a major financial institution with 40,000 employees in the field, essentially generating revenue on practically a 24/7 basis. If one of the systems they use is down for even 10 minutes, that is something like $1 million lost. So, there were a lot of events and the timing was right. Whether or not that was good timing on Dynatrace's part, we had a problem we needed to fix, they came to our location, it was a marriage, and we have been with them since.
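To put that figure in rough perspective, here is a back-of-envelope calculation. The per-employee rate it derives is my own illustrative arithmetic from the two numbers above, not a figure from the bank.

```python
# Back-of-envelope check of the "$1M per 10 minutes" figure. Illustrative
# arithmetic only; it simply spreads the stated loss across the workforce.
employees = 40_000
loss_per_outage = 1_000_000  # dollars per 10-minute outage, as stated
outage_minutes = 10

per_employee_per_hour = loss_per_outage / employees / outage_minutes * 60
print(f"Implied revenue at risk: ${per_employee_per_hour:.0f} per employee-hour")
# -> Implied revenue at risk: $150 per employee-hour
```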
How was the initial setup?
I was involved in one of the very first implementations, but I relied on an infrastructure team that did the physical installation and the acquisition of virtual servers for the agents and nodes. I was personally involved on the team that wrote one of the very first dashboards. That was three to four years ago. Frankly, it was more about just learning the product. Looking back now, I could do it with both eyes closed. At the time it took some getting used to, but I would not call it overly complex.
There may have been minor issues, but that was more our own people trying to understand the product. For example: "Let's install this agent on this server," then it did not work. "Oops." You back it out, and three days later you put it back in. A lot of that is teething. I do not see it as a product limitation; sometimes you do not know what you do not know and have to kick the tires a couple of times. Whether the product could have been a little easier, it is hard to say in hindsight.
I don't want to bring up Apple, but consider the Apple example: you just take the product out of the box and turn it on. That's it; that's the extent of the configuration. Dynatrace isn't quite like that, probably for a reason: the idea that it could just work as-is doesn't make sense, because individual customer environments are so different. You couldn't possibly have one-size-fits-all. It is almost impossible.
What about the implementation team?
We had Dynatrace guardians onsite.
What other advice do I have?
I would definitely recommend Dynatrace. I would say not to be fearful; embrace it. It comes down to the comfort level of your staff, so I would recommend starting with a medium- to low-profile application and implementing Dynatrace aggressively there. Once you are accustomed to it, go with all-in adoption.
It is a great product, but unless you are completely turnkeying it for someone else, your staff have to understand it. If you implement it and people do not understand or use it, you are really not getting anywhere. That is probably the key point of any recommendation I would make: train your people, or bring in the guardians and use them for six months.
Our technology is constantly evolving. Obviously, we hope and frankly expect that tools like Dynatrace will continue to evolve the AI element. I still think there is room for AI technology to grow; it is obviously getting better all the time. Voice assistant products are the new thing now, so there are a lot of changes in that technology. My expectation is that we will get much more sophisticated AI alerting and monitoring capability in Dynatrace, and we will be happy to embrace it as it becomes available.
If I had just one solution that could provide real answers, not just data, the immediate benefit to my team would be reducing the human interpretation, where someone has to log on and interpret the data. Any automated interpretation on behalf of a user or an operational team is better.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.