The scalability features still need improvement. They have recently added dynamic user features, which may enhance scalability, so we should evaluate that. Storage capacity should be increased: the shared file repository is limited to 999 files, and each payload is capped at 50 MB. Both limits should be raised. Also, when we run JMeter scripts in BlazeMeter, the BlazeMeter user interface does not recognize the property files we use in JMeter. This needs to be addressed.
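Until BlazeMeter's UI recognizes JMeter property files, one common workaround is to inline the property values into the script before uploading it. A minimal sketch in Python, assuming JMeter's standard `${__P(name,default)}` placeholder syntax; `inline_properties` is a hypothetical helper, not a BlazeMeter feature:

```python
import re

def inline_properties(jmx_text: str, props: dict) -> str:
    """Replace ${__P(name,default)} placeholders with literal values.

    Values come from the props dict (e.g. parsed from a .properties file);
    when a property is missing, the placeholder's default is used, and when
    there is no default either, the placeholder is left untouched.
    """
    pattern = re.compile(r"\$\{__P\(([^,)]+)(?:,([^)]*))?\)\}")

    def repl(match):
        name, default = match.group(1), match.group(2)
        fallback = default if default is not None else match.group(0)
        return props.get(name, fallback)

    return pattern.sub(repl, jmx_text)
```

Running the resulting JMX needs no external property files, so the BlazeMeter UI sees only literal values.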
The pricing is high because the advanced version comes with different features. Unlike JMeter, which offers a free version, BlazeMeter requires you to upgrade to a higher tier to access those advanced features. Because BlazeMeter comes with a cost, our client must agree to it. We've settled for the basic plan, which limited our ability to explore the higher versions. This is one drawback, especially since JMeter already provides many features out of the box. However, a positive aspect of BlazeMeter is that it offers various options for capturing different formats, such as JSON, XML, etc.
BlazeMeter does not provide integration with the Aternity tool. The solution can work with Dynatrace, AppDynamics, and New Relic, but it does not have the option to integrate with Aternity.
Sometimes, when we execute tests, the results calculated by BlazeMeter, specifically the response times for failed transactions, are incorrect. We've already reported this issue. If this could be fixed, BlazeMeter would be a much better tool compared to LoadRunner. Currently, because it incorrectly calculates response times for failed transactions, it provides data that isn't useful, and we have to manually aggregate the data to get accurate values. In future releases, I'd like to see BlazeMeter integrate with mobile applications and allow testing on real devices. By testing on real devices, we could gather metrics related to CPU usage, memory, and battery consumption. This would give us a better understanding of how the application performs on actual devices and help us ensure there are no battery drain issues, high internet usage, or excessive CPU or memory usage. This would allow us to confidently certify that the application is optimized for real-world device performance.
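Until the calculation is fixed, the manual aggregation can at least be scripted rather than done by hand. A minimal sketch in Python, assuming JMeter's standard JTL CSV columns (`elapsed`, `label`, `success`); the sample data is invented for illustration:

```python
import csv
import io
from statistics import mean

def avg_elapsed_passed(jtl_csv: str) -> dict:
    """Average elapsed time per label, counting only successful samples."""
    by_label = {}
    for row in csv.DictReader(io.StringIO(jtl_csv)):
        if row["success"].lower() == "true":
            by_label.setdefault(row["label"], []).append(int(row["elapsed"]))
    return {label: mean(times) for label, times in by_label.items()}

# Hypothetical results file: the failed 9000 ms sample is excluded.
sample = """timeStamp,elapsed,label,success
1,120,Login,true
2,9000,Login,false
3,130,Login,true
"""
```

Filtering on the `success` column before averaging reproduces the values we currently compute manually.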
Senior Manager at 360logica Software Testing Services
Real User
Feb 21, 2024
An area for improvement could be enhancing BlazeMeter's integration with automation scripts. It would be beneficial if BlazeMeter could support automation frameworks more effectively, including the use of Selenium scripts for both manual and automated load testing. Integration is one of the things lacking in BlazeMeter compared to some newer options. A lot of products are coming out, and BlazeMeter pricing is a factor. For example, LoadStorm by Neustar is integrated with built-in APMs. It won't capture all server stats, but it will collect the minimum important aspects – CPU consumption, utilization rate, and how much a single server is being stressed. If BlazeMeter offered similar functionality, it would be fantastic.
Integration with APM tools such as Dynatrace or AppDynamics is an area where the product has certain shortcomings and needs improvement.
When we use BlazeMeter for ramp-up and for designing scenarios in our company, we also use JMeter or other load testing tools, because they let us control granularity in seconds. BlazeMeter requires ramp-up and ramp-down granularity to be set in minutes, so supporting per-second granularity is an area where improvement is required. From a performance perspective, BlazeMeter also needs to be improved. At the development stage, JMeter has plug-ins and other extensions for WebSockets, and LoadRunner provides the same kind of extensions, but BlazeMeter has no extensions for WebSockets or Java applets. Decoding scripts for applications that contain Java applets is not possible with BlazeMeter, or even with JMeter, and that includes some Oracle and desktop applications, too.
Director of Quality Engineering at PAR Technology Corp
Real User
Nov 24, 2023
The tool lacks the parameterization needed to run the same script across different environments, a feature that needs some improvement. The tool should offer more ease of use across environments. The solution's scalability is also an area of concern where improvements are required.
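The usual JMeter-side workaround for cross-environment runs is to keep environment details out of the script via `__P()` placeholders and inject them as `-J` properties at launch. A minimal sketch of the property-selection step in Python; the `ENVIRONMENTS` map and host names are hypothetical:

```python
# Hypothetical per-environment settings; the JMX itself would reference
# ${__P(host)} and ${__P(threads)} instead of hard-coded values.
ENVIRONMENTS = {
    "qa":   {"host": "qa.example.com",   "threads": 5},
    "prod": {"host": "prod.example.com", "threads": 50},
}

def jmeter_args(env: str) -> list:
    """Build the -J property flags for one environment."""
    cfg = ENVIRONMENTS[env]
    return ["-Jhost=%s" % cfg["host"], "-Jthreads=%d" % cfg["threads"]]
```

The same script then runs unchanged against any environment; only the generated flags differ.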
Potential areas for improvement could include pricing, configuration, setup, and addressing certain limitations. Enhancements in data import/export and integration with other tools could be beneficial. Additionally, providing support for certain tools like Grafana, which some competitors offer, would be a good extension to consider.
BlazeMeter is a very handy tool requiring only drag and drop to operate, but I don't think I can generate a JMX file unless I run JMeter, which is one of my concerns when it comes to BlazeMeter. In our company, we are mostly unable to capture logs or events with BlazeMeter. We want BlazeMeter to accommodate mobile apps, especially since our company deals in mobile apps and we wish to test them using BlazeMeter. The solution has been good so far, but the JMeter dependency is one area that has been tricky for me since I cannot generate events. I cannot point to a particular weakness in the tool, but it is tricky in that anyone who wants to use it must depend on another tool, JMeter, to get the scripts and the JMX file before being able to run on BlazeMeter. In our company, an APK is generated whenever we develop mobile apps; when I drag and drop it as a script, a JMX file should be generated, but that feature is not included in the solution. That is the area where the solution lacks and can be considered for improvement.
VP QA Performance Engineer at a financial services firm with 1,001-5,000 employees
Real User
May 25, 2023
BlazeMeter has room for improvement in terms of its integration with GitLab, particularly in the context of CI/CD processes. While it has multiple integrations available, the level of integration with GitLab may need further enhancements. It is known to work well with Git and Jenkins, although the extent of compatibility with GitLab is uncertain.
Senior Product Owner at a financial services firm with 10,001+ employees
Real User
Sep 18, 2022
The seamless integration with mobiles could be improved. Right now, they have the UI testing capability, which provides the browsers. They made Safari available, which is amazing. We're able to test on Chrome, Firefox, and Safari. We want the capability to test Chrome on Windows, Chrome on Mac OS, and the capability to test Chrome on Android OS and iOS.
Service Virtualization Developer at Tata Consultancy Services
Real User
Jul 24, 2022
One problem while we are executing a test is that it takes some time to download data. Say I'm performance testing with a high-end load configuration: it takes a minimum of three minutes or so just to start the test. That's the bad part of the performance testing. I don't think they can reduce that time, because that's how BlazeMeter's performance testing is implemented, but it's a pain point whenever we run a performance test in a call or a demo, as well as in live testing when all the business people are there. The first time I run a given test, if it takes three minutes to download onto my server, that's understandable. But every time I rerun the same test, it is downloaded again, because once the test is completed the downloaded files are removed. That means I have to wait three to four minutes again.

We also had a call last week regarding secret keys. In JMX we have some Backend Listeners, such as Kibana, with usernames and passwords that we have to enter manually. When we upload the JMX file into BlazeMeter for performance testing, those usernames and passwords are viewable: anyone who has access to BlazeMeter can download the JMX file and see them. That's an issue with the performance testing.

Also, all the competitors have MQ protocol support, which is lacking in BlazeMeter's Mock Services. Having MQ protocol support in the Mock Services would be great for us. JDBC, the database communication, is also lacking. If we had those things, we would be completely satisfied with BlazeMeter's Mock Services. And for the API monitoring, we are missing a data-driven approach. If, for a single API call, we have 50 to 100 test cases, there should be no need for us to create multiple steps or to duplicate the test steps.
Instead, if we had a data-driven approach available, we could add the test data directly into an Excel sheet and feed it into a single test step to achieve what we need. We have raised this concern with the Perforce team as well, and they said they are working on it.
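The data-driven pattern being asked for can be sketched in a few lines: one test step, iterated over external data rows, instead of duplicated steps. A minimal illustration in Python, with CSV standing in for the Excel sheet; the `step` function and the data are hypothetical and do not reflect BlazeMeter's actual API monitoring:

```python
import csv
import io

def run_data_driven(step, rows_csv: str) -> list:
    """Run a single test step once per data row instead of cloning steps."""
    return [step(row) for row in csv.DictReader(io.StringIO(rows_csv))]

# Hypothetical step: compare an expected status against the success code.
def step(row):
    return int(row["expected_status"]) == 200

data = """user,expected_status
alice,200
bob,404
"""
```

With 50 to 100 rows, the same single step would run once per row, which is exactly what duplicating test steps simulates today.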
Mobile Network Automation Architect at BT - British Telecom
MSP
Jul 17, 2022
Overall, it's helped our ability to address test data challenges. The test data features on their own are very good, but version control for test data isn't included yet; I think that's an area for improvement. We can update the test data on the cloud, which is a good feature, and there's also test data management, which is good. Runscope doesn't have the test data management yet; Mock Services do, and performance testing has it. We can do the same test through JMeter, validating the same criteria, but the feedback from Runscope is quite visible: we can see the request and the response, what data comes back, and add the validation criteria. We can manage the test environments and test data, but running the same API request for multiple sets of test data is missing; we cloned the test cases multiple times to run it. They need to work on that. Version control of the test cases, and the ability to compare the current version against the previous version within Runscope, would be really nice. The history shows who made the changes, but it doesn't compare the changes. In the future, I would like to see integrations with GitLab and external Git repositories so we could have some sort of version control outside as well; there is no current mechanism for that. The ability to import OpenAPI specifications directly, instead of converting them to JSON, would also be nice. There are some features they could work on.
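The version comparison being requested could work roughly like a line diff over two saved versions of a test-case definition. A minimal sketch, assuming the versions are available as JSON-serializable dicts; nothing here reflects Runscope's actual internals:

```python
import difflib
import json

def diff_versions(old: dict, new: dict) -> list:
    """Return only the changed lines between two test-case versions."""
    a = json.dumps(old, indent=2, sort_keys=True).splitlines()
    b = json.dumps(new, indent=2, sort_keys=True).splitlines()
    return [line for line in difflib.unified_diff(a, b, lineterm="")
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
```

Applied to two stored versions, this shows what changed, not just who changed it, which is the gap the history view leaves today.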
The performance could be better. When reviewing finished cases, it sometimes takes a while for BlazeMeter to load. That has improved recently, but it's still a problem with unusually large test cases. The same goes for editing test cases. When editing test cases, it starts to take a long time to open those action groups.
Documentation for the solution could be improved because there are some areas, such as licensing costs, where there is a lack of information about the structure. I'd also like a comparison feature for after carrying out several tests, to see the difference in response times and other details; that would be a great feature for them to provide. Sometimes we'd like to add more users during a test run to check application sustainability. We can do it from the scripting end, but it would be great if BlazeMeter provided the option of adding a few more users while a test is running.
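The run-to-run comparison described above can be approximated outside the tool once per-transaction averages are exported from each run. A minimal sketch in Python; the transaction labels and timings are made up for illustration:

```python
def compare_runs(baseline: dict, current: dict) -> dict:
    """Percentage change in average response time per transaction label."""
    return {
        label: round((current[label] - baseline[label]) / baseline[label] * 100, 1)
        for label in baseline if label in current
    }

# Hypothetical exported averages (milliseconds) from two test runs.
baseline = {"Login": 120, "Search": 300}
current = {"Login": 150, "Search": 270}
```

A positive percentage means the transaction got slower since the baseline run, a negative one means it improved.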
I cannot recall coming across any missing features. The reporting capabilities could be improved. It would be ideal if it could incorporate tools such as APM or Dynatrace.
We have already provided feedback to our BlazeMeter vendor, who interacts with us directly. We would like, for example, some sort of grouping features to be available. There should be some visibility into load testing, and I'd like to capture items via snapshots. While they are in the cloud, it would be good to also offer on-premises options.
Head of IT Enterprise Architecture at a transportation company with 1,001-5,000 employees
Real User
May 12, 2019
A possible improvement could be the integration with APM tools. Different plugins for the most common APM tools could help to reconcile traffic charges with application behavior. For example, a root cause analysis of bottlenecks or analysis of non-linear behavior in applications.
My only complaint is about the technical support, see further details below regarding customer support. It would improve the product if their Chrome extension allowed you to modify the JMeter settings.
More runs per month for the basic plan would be useful. Also, being able to stream logs from AWS and integrate them with the test reports would be excellent.
The product could improve in areas such as mobile testing and the integration of AI analytics.
The scanning capability needs improvement.
We sometimes experience downtime, but not so frequently.
The only downside of BlazeMeter is that it is a bit expensive.
I believe that data management and test server virtualization are things that Perforce is working on, or should be working on.
If the solution had better support and the documentation was efficient it would do better in the market.
Having more options for customization would be helpful.
In terms of improvement, I would like it to have the ability to customize reports.
Reporting.