The initial use case was purely endpoint performance monitoring, but one of the key things that really shone about Aternity as a product was that the use cases were extremely broad. It became, without a doubt, our most important asset management tool.
It was used for productivity management — that was also a very strong use case.
Another use case was compliance and security. One of the key things we started to leverage it for was monitoring when people turned off or disabled antivirus products. We could measure that with Aternity and then take action.
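To make that concrete, here's a minimal sketch, in Python, of the kind of compliance check this enables. The CSV export, column names, and remediation step are hypothetical illustrations of the idea, not Aternity's actual data model or API:

```python
import csv

# Hypothetical endpoint telemetry export; the columns below are assumptions.
# Assumed columns: device_id, user, av_product, av_enabled ("true"/"false")
def flag_disabled_antivirus(path="endpoint_telemetry.csv"):
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["av_enabled"].strip().lower() != "true":
                flagged.append((row["device_id"], row["user"], row["av_product"]))
    return flagged

if __name__ == "__main__":
    for device, user, product in flag_disabled_antivirus():
        # In practice, a hit like this would feed a remediation workflow:
        # re-enable the antivirus product and notify the security team.
        print(f"{device} ({user}): {product} is disabled")
```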
We also used it for application performance, obviously. It provided super-deep levels of insight into applications through performance tracking.
We also used it for cost reduction when it came to unused licensed software. Adobe was a big one; Visio, Project, Access, etc. We managed to drop our spend quite heavily by using it for that.
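As a rough illustration of that reclamation logic, here is a short Python sketch. The usage export, column names, app list, and threshold are all assumptions for the example, not the tool's actual schema:

```python
import csv
from collections import defaultdict

# Hypothetical per-device usage export with assumed columns:
# device_id, app_name, minutes_used_last_90d
LICENSED_APPS = {"Adobe Acrobat Pro", "Visio", "Project", "Access"}
USAGE_THRESHOLD_MIN = 30  # under 30 minutes in 90 days counts as unused

def reclaim_candidates(path="app_usage.csv"):
    candidates = defaultdict(list)  # app_name -> [device_id, ...]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app_name"]
            if app in LICENSED_APPS and float(row["minutes_used_last_90d"]) < USAGE_THRESHOLD_MIN:
                candidates[app].append(row["device_id"])
    return candidates

if __name__ == "__main__":
    for app, devices in reclaim_candidates().items():
        print(f"{app}: {len(devices)} licenses look reclaimable")
```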
One of the key benefits was when it came to buying. From a procurement team perspective, very often, when they were going to buy new IT hardware, they would go to a couple of vendors, big names like Microsoft, Apple, etc., and the vendors would give them half a dozen test devices each. They would deploy those to various people and wait for feedback. Normally, the feedback would be very human and very speculative. More than likely, the person who got the super-shiny, super-sexy MacBook Pro or Surface Pro would say, "Yeah, I really like it. It's amazing. It does the job." But what we were actually able to do with Aternity was scientifically measure which asset was giving us the best performance for the spend.
We actually found, in some instances, that it wasn't always the most expensive laptop that performed best. It was the one that managed to run the company's image optimally. We were able to make real savings on spending, and do so scientifically. We did not need to solicit feedback from people; the feedback was present in the tool. So when it came to buying, we knew exactly what to buy, at the right price point, for that performance. There were big savings there.
Another key thing that we weren't anticipating saving a lot of money on was network capacity. There were some really interesting dashboards that you could get to in Aternity, out-of-the-box, with no configuration needed. They showed top talkers on a per-site basis. If you've got a really distributed organization (our company had offices in 200 countries), each country will procure network infrastructure from whichever local incumbent is the easiest, the cheapest, or the best one to get it from. You end up with a very complicated network. In the third-world regions, it's a lot of ADSL. In the more metropolitan areas, in first-world countries, you're getting expensive leased lines, or fibre, or dark fibre. For traditional network monitoring solutions, that can become quite challenging, especially when bandwidth and things like that are changing regularly. But what Aternity allowed us to do was actually see the individuals who were taxing the network from an endpoint perspective, and we could tackle that on an individual-by-individual basis.
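The top-talkers idea itself is simple to sketch. This Python example aggregates per-endpoint traffic by site and surfaces the heaviest users; the export file and field names are assumptions for illustration, not what Aternity actually emits:

```python
import csv
import heapq
from collections import defaultdict

# Hypothetical network telemetry export with assumed columns:
# site, device_id, bytes_sent, bytes_received
def top_talkers(path="network_usage.csv", n=5):
    per_site = defaultdict(lambda: defaultdict(int))  # site -> device -> bytes
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total = int(row["bytes_sent"]) + int(row["bytes_received"])
            per_site[row["site"]][row["device_id"]] += total
    # Keep only the n heaviest endpoints per site.
    return {
        site: heapq.nlargest(n, devices.items(), key=lambda kv: kv[1])
        for site, devices in per_site.items()
    }

if __name__ == "__main__":
    for site, talkers in top_talkers().items():
        print(site)
        for device, total_bytes in talkers:
            print(f"  {device}: {total_bytes / 1e9:.2f} GB")
```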
We could also give advice to local IT leaders on whether or not their bandwidth was appropriate for what they were doing. In some instances, we were able to tell people that they could actually shrink the capacity they were paying for because it was unnecessary. There were all sorts of "edge" use cases. Your ability to save money, improve performance, and improve productivity with Aternity is limited only by the imagination of the team in charge of the tool.
The solution also provides metrics about the actual employee experience of all business-critical apps, rather than just a few. You need to create signatures so that the tool can monitor them appropriately, but it is very agnostic. You point Aternity at the thing you want visibility into, and it gives you exactly that, in the ways that you want it. You're measuring from the user, from the inside out, and from the outside in. It gives you very different levels of perspective compared to the standard, traditional IT monitoring tools: SNMP, pings, polls. Those conventional, old-world metrics are very easy to dispute as an end-user. If you're an end-user and your experience is bad, someone telling you that the network is up and running doesn't really help you. Someone telling you that the server is good doesn't help you either. It's the perspective of the monitoring with Aternity that really changed the dynamic, because all of a sudden you're able to see things from the end-user's experience. So there are far fewer occasions when you are arguing with your end-user and saying, "No, we don't see an issue." You're far more an advocate for that person's experience, and you can tell much more quickly exactly what issues they are experiencing.
We also used its Digital Experience Management Quadrant (DEM-Q) to see how our digital experience compared to others who use the solution. Aternity was probably one of the earlier adopters of a strategy of allowing customers to baseline their experience against the wider marketplace. It's becoming more prevalent in other tool sets that I see across big enterprises, but it was at least 18 months ago that we started to see Aternity providing us with that capability. It was very interesting because one of the things that some of the bigger industry analyst firms, like Forrester, try to do is create "industry monoliths," where you can baseline against people within your industry. Media companies will look at other media companies; industrial transport and logistics organizations can benchmark against each other. But where Aternity, and some of the other vendors doing this at the moment, bring something quite new to the marketplace is that you're benchmarking against everyone. That allows you to really see whether what you're doing is correct for you as an organization. Are you getting the results that you need for the money you're spending?
Using DEM-Q undoubtedly affected our decisions about IT investments. It's always very difficult, especially at a large enterprise, to know that you're doing the right thing. When you go into a big purchase, especially for someone who is head of enterprise or head of IT, a key consideration is, "Am I spending the money wisely? Am I going to get return on investment?" If you are able to benchmark against your industry peers and see that you're doing the right thing, that in itself is a validation. It's a validation that you're headed in the right direction. It's a validation that you're spending the money appropriately for the improvements that you're getting.
It can also potentially help you to avoid spending money unnecessarily, because there are certain components, certain aspects of your stack, where you would need to invest heavily to get a small gain. The tool can allow you to look at whether or not that is a necessary investment. "Do I need to upgrade everyone's memory chips from 8 GB to 16 GB?" If you've got 8,000 devices, and an 8 GB memory chip costs you $100, you're looking at close to a million bucks. The tool can show you through its own metrics, and through the baselines against your industry peers, that maybe that's not a worthwhile investment. That million dollars is going to get you 5 percent, and that 5 percent is not necessarily really worth it. Outlook is going to open one second faster. Do you want to spend a million bucks so that everyone can get their emails one second faster? It's that kind of thing that makes decision making much more clinical, much simpler. When I'm sitting in front of a director and he says, "Why do you want this much money?" I want to be able to stand behind that request and say, "If I spend it, this is what you're going to get." That kind of ability to baseline, not only against your own org, but against industry peers, means that when you have those conversations, you can say those things much more confidently.
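The memory-chip question above reduces to back-of-the-envelope arithmetic. The sketch below uses the figures from the text plus two assumed inputs (launch frequency and working days) that I've added purely for illustration:

```python
# Cost side: figures from the text.
devices = 8_000
cost_per_upgrade = 100  # USD per 8 GB -> 16 GB module
total_cost = devices * cost_per_upgrade  # $800,000, "close to a million bucks"

# Benefit side: Outlook opens one second faster.
# Launch frequency and working days are assumptions, not measured values.
seconds_saved_per_launch = 1
launches_per_user_per_day = 4
working_days_per_year = 230

hours_saved_per_year = (
    devices * seconds_saved_per_launch * launches_per_user_per_day
    * working_days_per_year / 3600
)
print(f"Total cost: ${total_cost:,}")                        # $800,000
print(f"Hours saved per year: {hours_saved_per_year:,.0f}")  # ~2,044
print(f"Cost per hour saved: ${total_cost / hours_saved_per_year:,.0f}")  # ~$391
```

At roughly $391 per hour of saved waiting, the upgrade is hard to justify, which is exactly the kind of conclusion those baselines made easy to reach.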
We saved on hardware refresh by considering the actual employee experience, but it was not only that. Traditionally, with refresh, there is one single metric that IT departments use for going after assets that need refreshing, and that is age. Age is the number-one metric. If you've got 10,000 devices and you get enough budget to replace, say, 1,000 of them, 99 percent of big enterprises are going to go for the oldest 1,000 devices in the estate. That's completely wrong. Just because they're the oldest doesn't mean they're the worst. What we were doing with Aternity was targeting the 1,000 least-performant devices, not the oldest. It wasn't guesswork, but actual science that told us which 1,000 were the worst. The 1,000 human beings using those devices would gain the most productivity from having them refreshed.
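Here is a minimal sketch of that targeting logic in Python, assuming a hypothetical device-health export (Aternity's real scoring model is far richer than a single column):

```python
import csv

# Hypothetical export with assumed columns:
# device_id, age_years, health_score (higher = better user experience)
def refresh_targets(path="device_health.csv", budget_devices=1_000):
    with open(path, newline="") as f:
        devices = list(csv.DictReader(f))

    # Traditional approach: refresh the oldest devices first.
    by_age = sorted(devices, key=lambda d: float(d["age_years"]), reverse=True)
    # Experience-led approach: refresh the worst performers, regardless of age.
    by_health = sorted(devices, key=lambda d: float(d["health_score"]))

    oldest = {d["device_id"] for d in by_age[:budget_devices]}
    worst = {d["device_id"] for d in by_health[:budget_devices]}
    print(f"Oldest-{budget_devices} and worst-{budget_devices} overlap on "
          f"{len(oldest & worst)} devices")
    return by_health[:budget_devices]
```

The overlap figure makes the point: the oldest devices and the worst-performing devices are usually not the same set.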
You can also see whether or not a refresh is actually necessary. Traditional refresh is like painting the Forth Bridge: you paint the bridge and then go back to the beginning and start all over again, because it's taken you that long to do it. With traditional refresh programs, you replace those 1,000 devices, and then you start all over the following year and replace another 1,000 devices, because you get the same budget. And you do that again and again. But with Aternity, you can look at it and ask, "Do we need to?" Are the bottom 1,000 devices performing so badly that they need refreshing? Or are they performing well enough that maybe you don't need to spend that $10 million this year? You can roll that money into network upgrades, or server upgrades, or cloud migration, and wait until the end of the next financial year before you look at it again, because you can actually see how the estate is performing.
So you're saving money, undoubtedly, but also investing properly. You're now using metrics that provide you with certainty, instead of just something as monolithic as age. "Oh, a device is three years old, let's refresh it." Sometimes a three-year-old device is perfectly adequate.
In our company, we had 55,000 laptops. On average, the refresh spend would be between $50 million and $100 million a year. We were able to turn about 10 percent of that around, meaning savings of between $5 million and $10 million, by making sure that we were not refreshing devices that didn't need to be refreshed, and by targeting the most appropriate devices rather than just the oldest.
It's true that the simplest way to look at these products is in the monolithic way a financial analyst would look at return on investment: did we save money? That's really a small part of the value you can derive from this. The bigger bit is this: if you've just replaced 1,000 machines purely because they were old, maybe only 400 of those 1,000 users actually had bad experiences with their old laptops; the rest get a slight improvement and are only mildly happier. If you go at it with Aternity, you target the 1,000 worst devices, and you're highly likely to get a 100 percent success rate when you give each of those people a new device. All 1,000 of them are going to be happy. Your net promoter score, your customer satisfaction, is going to be much more accurate, and much higher. It's very easy to focus only on the financials, but there's actually a big chunk of value that doesn't fall into financial buckets. That piece is also very good, given the more accurate, targeted approach you can take with Aternity.
When employees complained of trouble with applications or devices, the solution enabled us to see exactly what they saw as they engaged with apps, sometimes to hilarious effect. We traveled to some remote offices to showcase the capabilities, and we would sit in an executive boardroom with 10 to 15 people and troubleshoot performance issues, in the room, in front of people. We would see surprise, amazement, and genuine pleasure on people's faces when we resolved issues that they had been facing for months or years. They had been having the same issues, the same performance problems, whether it was Excel taking a long time to load, or network instability, or voice call problems, and we would fix them in minutes, in front of them in a meeting, with absolute confidence. It would just blow their minds. You would see levels of faith and trust build in minutes, because they could see that there were no shadow games. We were not hiding behind a telephone. We were sitting in front of them and fixing it tangibly, right in front of their faces. The confidence and trust we built with them was completely irreplaceable.
What was even better than that was that we set aside small pockets of time each month to go and target the worst-performing machines and proactively reach out to the users. So instead of waiting for someone to complain, we would reach out to the people who were having the hardest time. We would have an IT rep phone a person and say, "Look, we can see your machine is running like absolute trash, and here are a couple of things we can do to fix it." That's just unheard of. Most people were completely blown away by the fact that they were getting a call to make their day easier and better, without having to ask for anything.