What is our primary use case?
We use this solution for performance and load testing of different types of web-based applications and APIs. We want to make sure that before any application, or any upgrade to an existing application, is made available to actual users, it is sufficiently tested within the organization.
We want to ensure that if there is a high volume of users, they have a seamless experience. We don't want them to experience slowness or an interruption in service, as a result of an increase in the number of users on the web service or website. Essentially, we test to guarantee that all of our users have a good experience.
How has it helped my organization?
When it comes to delivering enterprise-level testing capabilities, this solution is really good.
Using this tool, we are able to test an application end-to-end from anywhere. Specifically, we are able to test our applications that are used across geographies. This includes worldwide locations, from one end of Asia to the other end of the Americas. Geographically, we have full testing coverage for virtually all of our enterprise applications.
In terms of application coverage, there have been very few or no applications at the enterprise level that we have not been able to test using this tool. I think there is only one, but that was a unique case. Apart from that, at an enterprise level, in terms of coverage and geographically as well as technically, we have been able to test everything using this solution.
OpenText has a platform where I can share what is good and what further improvements I can make. There is also a community where we can leave feedback.
As an admin, I have the ability to copy all of the details from one project to another. However, I don't recall any functionality for cross-project reporting. If there are two projects available, I cannot run a load test in one or report metrics from the other project.
LoadRunner Enterprise offers multiple features to perform a deep dive into a project. For example, we can see how many load tests of a particular application were run over a certain period of time. We can also see what scripts and tests were built over a time period. There is lots of information that it provides.
It is very important that we are able to drill down into an individual project because we sometimes have to look into what set of tests was executed for a particular project, as well as how frequently the tests were run. This helps us to determine whether the results were similar across different executions, or not. For us, this is an important aspect of the functionality that this tool provides.
One of the major benefits, which is something that we have gained a lot of experience with, is the internal analytics capability. It has multiple graphical and analytical representations that we can use, and it has helped us a lot of times in pinpointing issues that could have caused SEV1 or SEV2 defects in production.
We found that when we ran the load test, those issues were identified by using the analytic graphs that LoadRunner provides. Based on this knowledge, we have been able to make the required corrections to our applications. After retesting them, we were able to release them to production. This process is something that we find very useful.
In terms of time, I find it pretty reasonable for test management. There are not too many things that we have to do before starting a load test. Once one becomes good at scripting, it does not take long. Of course, the length of time to run depends on how big and how complex the script is. Some load tests have five scripts, whereas some have between 25 and 30 scripts. On average, for a test with 10 scripts, the upper limit to set it up and run is a couple of hours.
Overall, we don't spend too much time setting up our tests.
What is most valuable?
One of the most valuable features of this solution is recording and replaying, and the fact that there are multiple options available to do this. For example, a normal web application can be recorded and replayed again on many platforms. Moreover, it can be recorded in different ways.
An application can be recorded based on the user experience, or just the backend calls, or using a technology-specific protocol, such as a Java-specific recording or a Siebel-specific recording. All of these different options and recording modes are available.
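As an illustration, a web (HTTP/HTML) recording typically produces a C-based VuGen script along these lines. The transaction name and URL here are hypothetical placeholders, and the script only runs inside the LoadRunner replay engine, not as standalone C:

```c
// Action.c - illustrative sketch of a VuGen web (HTTP/HTML) script.
// "Home_Page" and the example URL are placeholders, not real targets.
Action()
{
    // Mark the start of a measured transaction.
    lr_start_transaction("Home_Page");

    // Replay the recorded request for the landing page.
    web_url("home",
        "URL=https://example.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    // End the transaction; LR_AUTO passes or fails it
    // based on the replay status.
    lr_end_transaction("Home_Page", LR_AUTO);

    return 0;
}
```

A Java-specific or Siebel-specific recording generates analogous code against that protocol's API instead of the web functions shown here.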
The scheduling feature is very helpful because it shows me time slots in calendar format where I can view all of the tests that are currently scheduled. It also displays what infrastructure is available to me to schedule a load test if I need to.
What needs improvement?
Something that is missing is a platform where I can share practices with my team. I would like to be able to inform my team members of specific best practices, but at this point, I can only share scripts and stuff like that with them. Having a private community for my own team, where I can share information about best practices and skills, would be helpful.
OpenText needs to improve in terms of support. We are on the same support plan as before, but when the product was owned by HP, support was more responsive and better coordinated.
The monitoring and related analytical capabilities for load tests should be brought up to industry standards. This product integrates well with tools like Dynatrace and AppDynamics, but improving the built-in functionality would be a welcome addition.
For how long have I used the solution?
I have been using OpenText LoadRunner Enterprise for approximately 15 years. It was previously known as Performance Center and before that, it was simply LoadRunner. In terms of continuous, uninterrupted usage, it has been for approximately nine years.
I am a long-time user of OpenText products and have worked on them across multiple organizations.
What do I think about the stability of the solution?
Our tool is hosted on-premises and we have not faced stability issues as such. One problem that we sometimes experience is that, suddenly, multiple machines become unresponsive and cannot be contacted. These machines are called load generators in LoadRunner nomenclature. When this happens, we have to restart the central server machine and then everything goes back to normal. That sort of issue happens approximately once every six months.
Apart from that, we have not observed any stability issues. There are some defects within the tool which, from time to time, we have raised with OpenText. If they have a fix available, they provide it. Importantly, a defect does not make the product unusable while we wait for it to be fixed.
What do I think about the scalability of the solution?
This product is easy to scale and, as users, we have not encountered any issues with it. Over time, if I have to add more machines to monitor, or more machines to use during a load test, it's pretty straightforward.
If I compare it with other tools, I would say that it does not scale as well. However, as a user, it is okay and I've never faced any issues with adding more machines.
How are customer service and technical support?
Whenever we require support from OpenText, the process begins with us submitting a ticket, and they normally try to solve it by email. If required, they are open to having a video or audio conference. They use Cisco technology for conferencing and they are responsive to collaboration.
Unfortunately, technical support is not as good as it used to be. From an end-user perspective, speaking for both myself and several of my team members, we have found that over the last year and a half, the quality of support has gone down a couple of notches. The decline began with the transition from HP to OpenText; the support is simply no longer at the same level.
The level of support normally varies based on the plan that you have, but our plan has not changed, whereas the responsiveness and coordination have. Generally speaking, interacting with HP was better than it is with OpenText, which is something that should be improved.
Which solution did I use previously and why did I switch?
I have not used other similar tools.
How was the initial setup?
I have not set up other tools, so I don't have a basis for comparison. That said, I find that setting up LoadRunner Enterprise is not very straightforward.
Whether it's an initial setup or an upgrade to our existing setup, it's very time-consuming. There are lots of things that we have to look into and understand throughout the process. It takes a lot of time and resources, and that is one of the reasons we are considering moving to the cloud version. Ideally, making that transition would reduce our effort in upgrading to newer versions. The last couple of upgrades have been very costly in terms of time and effort, which could have been spent on more productive work.
To be clear, I was not involved in setting it up initially. Each time we deploy this product, we perform a fresh installation but use our older version's configuration as a base. Because the older version cannot be upgraded in place, we have to install the new version from scratch. I do not see a significant difference in time between installing afresh and upgrading an existing installation.
If I am able to identify the needs and what is required, from that point, it takes almost the same amount of time whether it is a clean install or an upgrade. The biggest challenge with LoadRunner Enterprise is to identify the database that we're using and then upgrade it. As soon as the database is upgraded successfully, 70% to 75% of the work is complete. It is the biggest component, takes the longest, and is the most effort-consuming as well.
What about the implementation team?
I am involved in the installation and maintenance, including upgrades.
What's my experience with pricing, setup cost, and licensing?
I have not been directly involved in price negotiations but my understanding is that while the cost is a little bit high, it provides good value for the money.
Which other solutions did I evaluate?
I did not evaluate other tools before implementing this one.
What other advice do I have?
At this time, we do not make use of LoadRunner Developer Integration. We are thinking of migrating to the latest version of LoadRunner, which probably has the LoadRunner Developer functionality. Once we upgrade to the new version, we plan to use it.
We are not currently using any of the cloud functionality offered by OpenText. In our organization, we do have multiple applications that are hosted on the cloud, and we do test them using LoadRunner Enterprise, but we do not use any component of LoadRunner Enterprise that is hosted on the cloud.
I am an active member of several online communities, including LinkedIn, that are specific to performance testing. As such, I have seen different experts using different tools, and the overall impression that I get is that LoadRunner Enterprise offers good value for the price. The level of coverage in terms of scripting and analysis helped to solidify its position as a market leader, at least a decade ago.
Nowadays, while others have closed the gap, it is still far ahead of other tools in the space. My advice is that if LoadRunner Enterprise can be made to fit within the budget, it is the best tool for performance testing and load testing.
I would rate this solution an eight out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.