Our primary use case is monitoring. My teams support a number of auto finance applications and hosted applications, and Apica offers outside-in visibility into what a user would experience if they were actually logging into the platform. We noticed that we were missing that outside component. We had a lot of internal monitoring in place to make sure the user experience was good, but it was not sufficient when it came to supporting our users, reporting back on issues they might be experiencing, and identifying and resolving problems arising from the open internet connection they have into our hosted environment. So today we use Apica to give us an outside-in view of what the end user actually experiences, from beginning to end, across their overall use case for our hosted applications.
We also use it to monitor the internal service platforms that support our infrastructure, our environments, and our internal clients. We use it to monitor port status and service status for network-based applications such as FTP file transfer platforms, MQ platforms, shared services, and SOA platforms, as well as a number of other internal platforms that provide shared services across our application stacks in service of our clients.
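Apica performs these checks through its own platform; purely as an illustrative sketch of what a basic port-status probe of this kind looks like, a TCP reachability check can be written in a few lines of Python (the host name in the usage comment is a placeholder, not one of our actual endpoints):

```python
import socket

# Illustrative sketch only: a minimal TCP port-status probe of the kind a
# monitoring platform runs against FTP, MQ, or SOA service endpoints.
def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host): check_port("ftp.internal.example", 21)
```

A real synthetic check would layer protocol-level validation (an FTP banner, an MQ handshake) on top of this basic reachability test.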
It gives us a clear line of sight into when we're actually having an external network event that's impacting our end users. Prior to implementing Apica, we had to rely on our end users to tell us, "Hey, we can't get into the website." With Apica and the regional monitoring we have set up for our higher-profile application stacks, we're able to tell whether we've got a regional network issue, a national network issue, or some other event, perhaps an internal network issue, that's manifesting as user login failures across our application stack. We didn't have that line of sight before Apica, so it's really helpful there.
The other thing it helps with, an indirect benefit of doing URL-based monitoring with these types of frameworks, is that we've actually caught a few expired certificates. We've also caught encryption changes, sometimes made by a service provider downstream from us, that have impacted our users' ability to access the environment.
Prior to having Apica, we never really had a clear line of sight into either of those things, other than some internal automation that was basically report-based and wasn't driven off real-time data like Apica provides. When your cert expires, Apica comes back with an alert saying, "Hey, my check has failed, and the reason is that I can't establish an SSL connection because the cert is invalid." That's a great benefit to reap from having the framework in place, and it wasn't anything we had thought of while we were implementing it. Those things are really nice to have.
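Apica surfaces these certificate failures through its own checks. As a hedged sketch of the underlying idea, an expiry check can be built on Python's standard ssl module (the host name in the usage comment is a placeholder, and the 14-day warning threshold is an arbitrary example):

```python
import socket
import ssl
import time

# Illustrative sketch only: how a URL-based check can catch an expiring cert.
def days_until_expiry(not_after: str, now=None) -> float:
    """Days remaining, given a cert's notAfter string like 'Jan  1 00:00:00 2030 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - (now if now is not None else time.time())) / 86400

def check_cert(host: str, port: int = 443, warn_days: float = 14.0) -> bool:
    """Return True if host's certificate is still valid for at least warn_days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5.0) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"]) >= warn_days

# Example (requires network access): check_cert("www.example.com")
```

Note that `wrap_socket` with a default context will itself raise an `ssl.SSLCertVerificationError` on an already-expired cert, which is essentially the failure mode the Apica alert reports.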
It's too early for me to give a definitive answer on whether or not it has addressed edge cases, because we haven't been able to build out sufficiently complex user scenarios in our synthetic monitoring with Apica. But from what we have set up, it will definitely give us more insight when we're dealing with a complex infrastructure-based failure scenario. It gives us more insight into where specific failures are occurring, because it returns a lot more data detailing where the user experience is actually breaking down. Some of our apps have an incredibly complex application infrastructure and architecture, and with the diagnostic data from the synthetic logs that come back with the alerts, we're able to quickly narrow down where within that infrastructure we're actually having a problem. So I would say it does help, but our synthetics aren't yet deep enough that I could regression test the entire application stack for one of my apps. I just can't do that yet, but it's definitely on our roadmap of to-dos.
The fact that Apica offers multiple deployment options, like on-premise, hybrid, and managed cloud solutions, definitely helps my company meet our security requirements. Some of the internal tests that we need to run require us to have on-premise infrastructure components, so having a hybrid option is definitely helpful. The other piece is that the flexibility to go entirely cloud with Apica is incredibly beneficial, because you then get a line of sight from regions all around the world, which gives you visibility into what's happening from a client use case perspective. For example, since we do have application products in Canada and a lot of clients there, I could, in theory, have an Apica presence in a Canadian region that gives me a localized view of the user experience. In conjunction with other regional views, that helps me determine, when I am seeing a problem, whether it's a regional issue or something more global in nature. That is usually beneficial for us.
We use its ability to run our own scripts. What we use Apica for right now has primarily been based on importing Selenium scripts that we've developed to mimic our user transactions. We've also been working on utilizing their automation platform, ZebraTester, and we've been learning how to work with it, so we're still in the early stages there. But we've been seeing a lot of additional potential in the ZebraTester framework as well. LoadRunner is something we've been talking about, but we haven't really explored it at this point in time.
What we've developed in Selenium, we've been able to easily convert into the native Apica workflows for the synthetics that we have configured. Once those Selenium scripts are created, you use them once, and then they're in Apica. The results have been fantastic from that standpoint. The simplicity of using something common and standard across the industry, namely standard Selenium, to create your synthetic transaction scenarios makes it really easy to get into the platform in a more in-depth way, versus having to learn an entirely different scripting language or learn something new in order to do those types of things.
It's hard to say whether or not this scripting feature has saved us money or resources. Because of that flexibility, more people have been able to access that component than normally would be able to, so from that standpoint, it has increased our adoption rate. It hasn't necessarily improved things beyond that, but with an increased adoption rate, because it's easier to implement and easier for more people to use, we're getting more value out of the framework without having to have dedicated scripts or dedicated people writing automation for it.