My team does not use it, but we have other teams that use LoadRunner Enterprise.
We are an SI, and we have a team that does automated testing. They use LoadRunner Enterprise and similar products, and I am continually looking for ways to improve the start-to-delivery time of changes, upgrades, and releases. We have a content server, and automated testing sits on top of it. I have Selenium, and I have LoadRunner Enterprise to do the automated testing of the active CI/CD pipeline. Now that LoadRunner is an OpenText product, it will be easier for us to say that our product of choice is going to be LoadRunner. We can drop it in, configure it, and off we go. Then, hopefully, through that link, we will get an understanding of Content Suite, Documentum, and digital asset management, and we will get personalized, more appropriate usage profiles and starting points that we would not get if we were using JMeter or Selenium. We would not have to teach the tool about the OpenText stack.
We just finished the upgrade from one version of the codebase to another. We have a whole pile of custom code from other vendors, other third parties, and OpenText as well. Doing any upgrade requires a huge amount of retesting. We have to do functional testing and regression testing. Some of that is being done manually, and some of it is being done through automated scripts. We are looking to pull that together so that we can get a faster turnaround time. We can also get more test coverage and get it right. We have a three-month window to get a release out, and we can only do a certain amount of manual testing with people sitting there. With automated solutions, we can leverage a lot more power to push through and get things done, and that gives me the confidence that it works.
We implemented LoadRunner Enterprise for test coverage. With manual testing or other tools, there are only so many tests that we can run in a given time window. There are only so many tests we can do by sitting there manually, without throwing thousands of people at testing, and still get consistent results coming back. When we do an upgrade, there is not just functional testing but also things like load testing and performance testing. We are rolling out Azure Information Protection in the next couple of months. One of the things we have to answer before it goes live is: if we turn it on and run it in the way our business works, what is the impact when it comes to user concurrency and user profiles? Is it going to work, or is it going to collapse under its own weight? Are users going to notice? What is the operational impact they will feel? Is it an extra ten milliseconds or an extra two minutes to upload a document? Being able to model all that and have controlled evidence available for all the different tests is helpful.
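The concurrency question above ("an extra ten milliseconds or an extra two minutes?") can be turned into a measured number. This is a minimal Python sketch, not a LoadRunner script: `upload_document` is a hypothetical stand-in for a real upload call against the system under test, and the user counts and 5 ms delay are illustrative assumptions.

```python
import concurrent.futures
import statistics
import time

def upload_document(doc_id):
    """Stand-in for a real document upload; replace the sleep with an
    HTTP call against the system under test. Returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.005)  # placeholder for server-side work
    return time.perf_counter() - start

def measure_concurrency(num_users, uploads_per_user):
    """Run uploads from `num_users` simulated concurrent users and
    report latency statistics, so the operational impact becomes a
    measured number rather than a guess."""
    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_users) as pool:
        futures = [pool.submit(upload_document, i)
                   for i in range(num_users * uploads_per_user)]
        for future in concurrent.futures.as_completed(futures):
            latencies.append(future.result())
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(len(latencies) * 0.95) - 1] * 1000,
    }

stats = measure_concurrency(num_users=20, uploads_per_user=5)
print(f"median {stats['median_ms']:.1f} ms, p95 {stats['p95_ms']:.1f} ms")
```

Comparing the median and p95 figures before and after a change (such as turning on a new feature) gives the controlled evidence described above.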
With LoadRunner now being a part of the OpenText family, there is knowledge sharing between the sibling applications. It means that LoadRunner is better positioned or should be better positioned to pick up their other solutions and support them, and we do not have to start from scratch.
We have various customers who use our solutions. Every time we roll out a new integration, change, or upgrade, there are always SLAs. There are always user expectations for how long things take. There is always bug hunting going on in regression testing to make sure that we have not broken anything. Being able to automate those kinds of tests along with the load and performance testing helps reduce the complexity and risk of those activities to the business. With a complex system, there is a risk that we change something over here, the butterfly flaps its wings, and we end up with something broken over there. We do not see it because we are looking at the new bit over here. With manual regression testing, we can only get a certain percentage done, but with an automated tool, we cover the entire piece. We can identify a suspect and have a look at it. We can see that with this combination of data and this combination of settings or users, it works; then we change this, and it no longer works. We can then fix it. As we roll out new features and functionalities, we can do performance testing and load testing and make sure that we are ready with the infrastructure to support them. If we roll something out today and it struggles, then I know in advance that I can put more infrastructure in so that when we turn it on, things work. I can also take it year by year and slot by slot, compare the data, and say that my performance metrics were within this range and have since changed. If something creeps up towards the SLA, or towards the point where users begin to spot it, I can take action and ask for some extra storage space. If we consume 10 GB a month on a 100 GB drive, by month eight, we need to order some more space. We do not want to find out two weeks before we get into the tenth month.
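The storage arithmetic above (10 GB a month on a 100 GB drive, order by month eight) is simple enough to script as a trend check. A minimal sketch, assuming a two-month procurement lead time via the hypothetical `lead_time_months` parameter:

```python
def months_until_full(capacity_gb, used_gb, growth_gb_per_month):
    """Whole months remaining before the drive fills at the current rate."""
    return int((capacity_gb - used_gb) // growth_gb_per_month)

def reorder_month(capacity_gb, growth_gb_per_month, lead_time_months=2):
    """Month by which to order more space, leaving `lead_time_months`
    of procurement lead time before the drive fills."""
    full_at_month = capacity_gb / growth_gb_per_month  # 100 GB / 10 GB per month = month 10
    return int(full_at_month - lead_time_months)       # order by month 8

print(reorder_month(capacity_gb=100, growth_gb_per_month=10))  # → 8
```

Feeding the same check with measured growth rates from the trend graphs turns "we do not want to find out two weeks before" into an automated warning.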
By being able to see those trends, graphs, and response times for SLAs, but also for general performance and overviews, we can begin to make sure that everyone is heading in the right direction.
For me, having a wider test coverage—not just functional testing but performance and load testing—means that I have less to worry about because I have more coverage. I can have five guys for a week pressing buttons, or I can have LoadRunner do a regression test suite of x thousand cases in the same amount of time. For coverage, it is a much more useful tool. It has a history. It is great. It is not a new product that OpenText just built themselves. They have got an industry-leading group of people who really understand things, so it is a great addition.
I do not personally use LoadRunner Developer. All the test automation is done by my test automation team. It is very important for me to be able to turn around changes for customers and get things out there. When I started with the OpenText stack, they were doing one release every two or three years, whereas now, they are doing quarterly releases. When you have heavily customized systems, quarterly releases, and all these integrations with SharePoint, SuccessFactors, Salesforce, and other applications, the number of things that you have to certify and accredit for testing every time you go live is a much bigger headache in the connected world. There is the nuance of all the different variants, data models, business cases, and scenarios that you see, so it is becoming massively unsustainable to do that manually. Having a tool like LoadRunner to do that for you, and additionally having it as a part of the family, is valuable. They are on the inside, and they have deep integrations or deep access to all the other OpenText lines of business, and vice versa. Both sides are going to benefit and grow from that close integration. It allows me to say that I can shift left because I have the confidence that automated testing has covered all my use cases. I have my performance data. I have my load data, and I am within SLA. Trying to do that manually or with some other tools takes a lot longer, and we do not always get the complete coverage that we want. There is always some risk around how many test cases you can manage to run in a week versus just letting a tool plow through them.
Most of my test automation guys use the scripting engine and some of the drag-and-drop features depending on what they are doing, such as building the usage profiles and the automation for the things that they want to do. The tool is shortening the amount of time it takes us to build those automated test use cases. We have use cases for adding a document as a user, as a records management user, as an admin, as someone who has sysadmin rights, or as someone whose first language is French. We get all of these little nuances of access and permissions and everything else, and it becomes interesting. Being able to use tools to deliver all that makes life easier for me.
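The usage profiles described above (a standard user, a records management user, an admin, and so on) can be modeled as weighted mixes of actions. The following Python sketch shows the idea; the role names, actions, and weights are illustrative assumptions, not anything taken from LoadRunner:

```python
import random

# Hypothetical usage profiles: relative action weights per user role.
PROFILES = {
    "standard_user":   {"add_document": 5, "read_document": 80, "search": 15},
    "records_manager": {"add_document": 20, "apply_hold": 30, "read_document": 50},
    "admin":           {"change_permissions": 40, "read_audit_log": 60},
}

def next_action(role, rng=random):
    """Pick the next virtual-user action according to the role's weights,
    so the generated load reflects how a particular client actually works
    rather than firing generic requests."""
    profile = PROFILES[role]
    actions = list(profile)
    weights = [profile[action] for action in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

# Drive a short scripted session for one role:
session = [next_action("standard_user") for _ in range(10)]
print(session)
```

Swapping in a different `PROFILES` table per client is what makes the replayed load an accurate reflection of that client rather than 200 generic requests.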
The more I spend time with these tools, the more I understand not just the technical benefits but also how they benefit the end-user community. I understand how they can be used to deliver business goals rather than just technical metrics and help shape what we do and give a reflective user experience. We get to compare what our users do on that client system, and we can quickly build user profiles for that particular client rather than throwing 200 generic requests. We do not work like that because then you end up with incorrect data, and you make wrong designs.
LoadRunner Enterprise has helped streamline our testing processes. My team is primarily focused on the content management space, where we have the cadence of quarterly releases. We have our own custom modules, and there are custom builds that we have done for customers. We are quickly able to do regression testing, performance testing, load testing, and new feature testing. We are able to test all the different permutations of user location and object types. We are able to test different variants, such as adding a document and then adding an email here or there. We are able to package that up and run it as part of our deployment pipeline, which allows us to align more closely with clients. We are not a year or two behind where they are, which used to be the only way of doing it. It also assures us that even though we are going faster, we are actually doing more while going faster. We are not going faster and doing less. We are now getting more value.
LoadRunner Enterprise has reduced our workload and made our life easier. The usage profiles are a pain to set up; we have to invest the time to define what the users do on the system. Do they add documents? Do they read documents? We have to understand what those profiles look like so that we can replicate what the client would do, but once we get that and run them, I know that they reflect my client. They are an accurate reflection of what the client does. I know that the regression testing is going through all the historic and new things, and everything works. With the performance or load testing, I know that as I turn on any feature in prod, which is always different from every other environment, it is going to work. It is not going to collapse under its own weight. It is going to perform within SLA and within expectations for end users without causing too many problems, so it becomes a useful piece of armory to be able to say to the business that this is going to work. We are able to model situations where we turn something on and it does not work because we do not have enough infrastructure. We turn it off, put more infrastructure in, turn it on again, and see if it works. Being able to model all of that, find thresholds or breakpoints, and prove that we can get the throughput makes the conversations easier. It makes the journey smoother as we do things.
LoadRunner Enterprise has helped improve our product quality.