The main difference is the interface; the look and feel have changed, but the background setup and configuration remain the same. The project admin team had already set up LoadRunner Cloud in our environment. Since AI plays a major role in today's world, many tools are expected to integrate with it. If LoadRunner has AI integration, that would be a great feature. In past projects, including those with LoadRunner and NeoLoad, clients often asked about integrating CI/CD pipelines, such as using Jenkins to automate the triggering process. I’ve done POCs on this, and it’s possible. Once set up, the pipeline can automatically execute tests without manual intervention.
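The CI/CD triggering the reviewer describes can be sketched in a few lines: a pipeline step starts a load-test run and polls until it reaches a terminal state, with no manual intervention. This is a minimal sketch, not the actual LoadRunner Cloud or NeoLoad API — the start/poll callables are injected stubs that a real Jenkins job would replace with the tool's REST calls.

```python
# Hedged sketch: automate test triggering from a CI pipeline.
# `start_test` and `poll_status` are hypothetical hooks standing in for a
# real tool's REST API (e.g. a POST that starts a run, a GET that reads its
# status); they are injected so the control flow stays tool-agnostic.
import time
from typing import Callable

def trigger_and_wait(start_test: Callable[[], str],
                     poll_status: Callable[[str], str],
                     interval_s: float = 5.0,
                     timeout_s: float = 3600.0) -> str:
    """Start a test run and block until it reaches a terminal state."""
    run_id = start_test()                       # e.g. POST /runs in a real API
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = poll_status(run_id)            # e.g. GET /runs/{id}/status
        if status in ("PASSED", "FAILED", "ABORTED"):
            return status                       # CI step passes/fails on this
        time.sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not finish in {timeout_s}s")
```

A Jenkins stage would call this and fail the build when the returned status is not `PASSED`, which is the "execute tests without manual intervention" flow the review mentions.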
Senior Manager, Performance Engineering at Enel Group
Real User
Top 20
Jun 3, 2024
Our company provides actionable intelligence to the development community and to DevOps. The solution has helped optimize the application in terms of the code base and tuning the environment. Some improvements can be made in the solution's user interface and BLS. The vendor adopts new technologies quite late.
One area for improvement in LoadRunner Cloud, especially for agile models, is its limited support for functional testing alongside its robust non-functional testing capabilities. Unlike some other tools in the market that offer both functional and non-functional testing within a single platform, LoadRunner requires separate test scripts for each, doubling the testing effort.
The product must provide agents to monitor servers. If we want to monitor our servers, we should be able to do it by integrating the servers with LoadRunner Cloud’s dashboards. The added feature will enable us to see exactly what is happening in the servers at a particular time.
We're interested in leveraging the scriptless automation capabilities available in several other tools. Some of our less technically inclined manual QA testers find the current options insufficient; they want drag-and-drop functionality or more intuitive scriptless automation options. Scriptless automation is an area that can be improved.
Cloud Manager at a financial services firm with 5,001-10,000 employees
Real User
Top 20
Oct 27, 2023
Initially, there were a couple of things, but they got resolved. When they released it three years back, they did not support multifactor authentication. In my business unit, we use Okta for authentication. They did not support that earlier, but we requested it, and they implemented it. At this time, I do not see anything that they need to improve in existing features. In terms of new features, they could natively integrate with chaos engineering tools such as Chaos Monkey and AWS FIS. With LoadRunner, we can generate load, and if chaos tools are also supported natively, it will help to bring everything together.
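The chaos integration suggested above amounts to lining up a fault-injection event with the load test's timeline, so results before and after the fault can be compared. A rough sketch under stated assumptions: the `sample_load` and `inject_fault` hooks are hypothetical stubs — a real setup would call AWS FIS or Chaos Monkey where `inject_fault` fires; none of those calls are shown here.

```python
# Hedged sketch: run a (simulated) load test and inject a fault at a fixed
# offset, recording whether the fault was active at each sampled second so
# the load metrics and the chaos event share one timeline.
from typing import Callable, List, Tuple

def run_with_chaos(duration_s: int,
                   fault_at_s: int,
                   sample_load: Callable[[int], float],
                   inject_fault: Callable[[], None]) -> List[Tuple[int, float, bool]]:
    """Return a list of (second, load_metric, fault_active) samples."""
    timeline = []
    fault_active = False
    for t in range(duration_s):
        if t == fault_at_s:
            inject_fault()      # real life: e.g. start an AWS FIS experiment
            fault_active = True
        timeline.append((t, sample_load(t), fault_active))
    return timeline
```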
Performance & Analytics Team Lead at a financial services firm with 10,001+ employees
Real User
Top 20
Oct 20, 2023
While evaluating tools and saying that this is the tool we are going to go with, one of the biggest challenges that I faced was related to our previous experience with Micro Focus. LoadRunner and Silk Performer used to be under Micro Focus before OpenText bought it. Towards the end, Micro Focus did a big money grab where they went around and harshly audited all the companies. My company got hit with millions of dollars for the software that should have been removed. Even though no one was using it, we still got hit. It left a horrible taste and a horrible reputation for Micro Focus at my company. I know OpenText is a different company, but OpenText needs to somehow address and show former Micro Focus clients or LoadRunner clients, Silk Performer clients, and ALM clients that they are not the same company. I come to this conference and I get this message, but I have to try and sell that to my manager. The manager who got hit hard with the charge is not going to believe me. He still has that bad taste. They are influential people, and they remember that, so OpenText somehow has to overcome that. Their documentation is not technical enough for us. We would like to have much deeper technical documentation so that we can self-serve without constantly having to go back to them and ask.
Senior Quality & Test Architect at a government with 1,001-5,000 employees
Real User
Top 5
Mar 9, 2023
Enterprise is the next level up from Professional, but if you have the cloud version, you are almost there. Previously, there was no cloud version; you had LoadRunner Enterprise and LoadRunner Professional. As the cloud has become more and more important, they have expanded the line-up, and there is now a cloud version as well. We use both. There are always areas that can be improved. One area of improvement is the replaying of captured data within the development environment: it would be beneficial if the replay feature could accurately mimic what the actual application is doing, for better analysis and testing. As web technologies have evolved rapidly in the past few years, the software has included a runtime viewer to debug scripts, along with other logging features. However, the additional logging can be expensive, and the runtime viewer needs to be updated to better support newer web technologies. The logging feature itself is not problematic; the issue lies in the outdated runtime viewer's inability to effectively render newer Web 2.0 technologies, leading to a less visually appealing and potentially less informative display. Despite this deficiency, users can still access all the necessary logging information and tailor it to their needs. The main issue is the runtime viewer's lack of ease of use; it needs to be modernized to better support newer technologies. There is also a reporting component of the cloud that could be improved, but it could simply be different from what I'm used to.
I'm more accustomed to using the analysis program included with the on-premise software, whether LoadRunner Enterprise or LoadRunner Professional. The analysis engine, one of the three major components of the software package, examines the data collected by the load test, or performance test (I'm not sure which it is referred to as), and produces a variety of reports. They do that in the cloud as well, and it seems to be pretty detailed, but we may not have as much control over what you want to get out of it. Still, I think it's more than adequate in most cases.
Head -Consulting and Delivery at Avekshaa Technologies
Real User
Top 20
Feb 6, 2023
There are two features that I would want Micro Focus to work on:
1. It should have a feature to report the 99.9th percentile success rate.
2. We should be able to create a performance dashboard with InfluxDB or Elasticsearch integration with Micro Focus Cloud LR. We need Micro Focus Cloud LR to send feeds to InfluxDB or Elasticsearch for each run, for:
1) Real-time PT result publishing
2) Trending between runs
3) Data mashing with Grafana/Chronograf
4) Live alerting
The feed from the Micro Focus Cloud LR instance to InfluxDB should be configurable, or an integration touch point should be available for sending real-time feeds into the DBs.
Sr Consultant at a computer software company with 501-1,000 employees
Consultant
Aug 11, 2022
While executing the test, the view automatically refreshes, and when I need to see the component further down, it does not allow me to see it. Since it's automatically refreshing, I'm unable to see all the lines in the tool that we see in the editor window. We did have some challenges with the initial implementation.
Senior Manager - Performance Architect at Publicis Sapient
Real User
Top 20
Aug 8, 2022
If you get a raw file on a standalone instance, you are on your own to slice and dice the results. I want to see errors versus response time, and I want to see how throughput was performing when there was a spike in errors or response time at a certain period of time. These types of options are not available in LoadRunner Cloud; they would make users' lives easier and help them drill down to the exact time. In the next release, it would be nice to have more coverage in terms of load generators. Then, you would be able to drill down on the raw results and analyze more in terms of response-time spikes or errors. CI/CD integration could be a little bit better. When there's a test and you see high response times in the test itself, it would be great to be able to send an alert. It would give a heads-up to the architect or ops community.
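The drill-down described above can be sketched with a small correlation pass over raw results: given per-interval response times, error counts, and throughput, flag the intervals where a response-time spike coincides with elevated errors, so an alert can point at the exact time. The z-score threshold here is an illustrative choice, not anything the tool prescribes.

```python
# Hedged sketch: correlate response-time spikes with error counts per interval.
from statistics import mean, pstdev

def find_spikes(resp_ms, errors, throughput, z=2.0):
    """Return the intervals where response time is more than `z` standard
    deviations above its mean AND the error count is above its own mean,
    carrying throughput along for context."""
    rt_mean, rt_sd = mean(resp_ms), pstdev(resp_ms)
    err_mean = mean(errors)
    hits = []
    for i, (rt, err) in enumerate(zip(resp_ms, errors)):
        if rt_sd > 0 and rt > rt_mean + z * rt_sd and err > err_mean:
            hits.append({"interval": i, "resp_ms": rt,
                         "errors": err, "throughput": throughput[i]})
    return hits
```

Each hit identifies the exact interval to alert on, which is the heads-up to the architect or ops community the review asks for.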
Reporting and analysis need improvement. Compared to the old school LoadRunner Windows application, the reporting and analysis are mediocre in LoadRunner Cloud.
Sr. Technical Test Analyst at an educational organization with 1,001-5,000 employees
Real User
Jun 14, 2021
When it comes to the CI pipeline, there were some limitations initially, but the latest version of LoadRunner is very helpful; it can integrate into the CI/CD pipeline. We are trying to put it into a complete CI/CD pipeline, but there are still some challenges when you try to run it through different protocols. The challenges are around how you can containerize applications. There are limitations with some protocols, such as desktop, and when it comes to database testing, there are some things we can't do through CI/CD. For CI/CD, the previous versions may not be the right ones, but the latest version is definitely a step ahead. We are aiming for 100 percent, but we have achieved around 60 to 70 percent in CI/CD. Still, it's very good to have that capability. Also, it would be helpful if LoadRunner Cloud offered the same options as the enterprise environment, where we had multiple models and options while creating the load profile. Not all of those options are available in LoadRunner Cloud; if they could be added, it would be good.
The performance has really improved in terms of running test cycles. The product used to crash on-premises when it had a lot of trades being pumped in. Because it is memory intensive, it used to crash if it ran out of memory. That was the limitation of the on-premises setup. Running those tests and cycles in the cloud is much faster. Everything is already provisioned in the cloud — the RAM, CPU, compute, and storage — which makes running these test cycles easier. Test cycles are highly effective. Of course, you need to have a test strategy, like volume-based load testing: configure some test cases and run them in cycles. Cloud performance is much faster, and volume-based endurance testing is easier in the cloud.
There are three modules in the system that are different products packaged into one, and they can sometimes be difficult to figure out, so they should be better integrated with each other. The current support model is something that can be improved.
We encounter hurdles while running the Professional version in an on-premises setup. This is an area they should work on improving.
They should minimize the use of coding for the solution. Also, they should include features to import other scripts without rewriting them.
The product could be priced more affordably compared to similar products.
I don't know of any features that should be added. The solution isn't lacking anything at this point. We're happy with what is on offer.
Improvements to the reporting would be good. Technical support can be improved.