After researching various performance testing tools, I can confidently say that the top two are Apache JMeter and Micro Focus LoadRunner.
For starters, Apache JMeter is a 100% Java desktop application that supports load testing and functional performance testing of web applications. I think it's one of the best free load-testing tools for developers, which is probably why it is so popular. What's nice about it is that it is open source, which makes it highly accessible to software businesses of all sizes. And despite being open source, it is extremely versatile.
Some of the solution's greatest aspects are that multiple load injectors can be handled by a single controller, it does not require dedicated load-testing infrastructure, and it has a user-friendly UI that requires less programming than other API performance testing tools. What's more, it supports all Java apps and is compatible with several web and networking protocols, such as HTTP, HTTPS, FTP, LDAP, SOAP, and TCP. Because it is Java-based, it also supports JDBC database connections and message-oriented middleware (MOM) through JMS.
In addition, it has simple graphs and charts that provide vital load-based statistics and resource utilization monitoring. And the option of using shell scripts and native commands for testing procedures makes it easier to implement.
Apache JMeter also allows you to test applications for both dynamic and static resources. Users can therefore utilize resources such as servers, logs, queries, scripts, and files during testing. Testers can also inspect applications under heavy load and evaluate their performance and robustness against different load types.
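To make the "heavy load" idea concrete, here is a minimal, hypothetical sketch of what a load injector does under the hood: a pool of concurrent "virtual users" firing requests and collecting latency statistics. It targets a throwaway local HTTP server purely so the example is self-contained; tools like JMeter automate exactly this at far larger scale, with real protocols, assertions, and reporting.

```python
# Minimal load-generation sketch (illustrative only; server, URL, user and
# request counts are all made up for the example). A pool of concurrent
# "virtual users" hits a throwaway local HTTP server and simple latency
# statistics are collected -- the core job a load injector performs.
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Throwaway local server standing in for the application under test.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_request(_):
    """Issue one request; return (latency in seconds, success flag)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        ok = resp.status == 200
    return time.perf_counter() - start, ok

# 8 concurrent "virtual users" issuing 40 requests in total.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(timed_request, range(40)))

latencies = sorted(lat for lat, _ in results)
errors = sum(1 for _, ok in results if not ok)
print(f"requests={len(latencies)} errors={errors} "
      f"mean={statistics.mean(latencies) * 1000:.1f}ms "
      f"p95={latencies[int(0.95 * len(latencies))] * 1000:.1f}ms")
server.shutdown()
```

A real tool adds what this sketch lacks: ramp-up schedules, think times, protocol support beyond HTTP, and correlation of results with server-side resource metrics.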
The solution has a lot to offer. Some of the features that I have found to be most valuable include:
It supports multiple load injectors managed by a single controller
It is highly portable and supports all the Java-based apps
It involves fewer scripting efforts as compared to other tools
It has simple charts and graphs that are helpful for analyzing key load-related statistics and monitoring resource usage
It supports integrated, real-time Tomcat collectors for monitoring
I also strongly recommend Micro Focus LoadRunner, which I consider to be a very sophisticated solution. One of its most powerful features is that it can create and handle thousands of users at the same time. The solution enables you to gather all the required performance information, covering the application as well as the underlying infrastructure.
What's good about Micro Focus LoadRunner is that it enables software testers to have complete visibility over their system's end-to-end performance. It specializes in detecting bottlenecks before the application enters the implementation phase, so users can evaluate each component individually before it actually goes live.
I have found it to be helpful in detecting performance gaps before the implementation or upgrade of a new system. It provides users with highly advanced predictive features to forecast expenses for up-scaling application capacity. Due to accurate predictions of expenses related to hardware and software, it is easier to improve the capacity and scalability of your application.
Some of its biggest advantages are:
It reduces the cost of application downtime that stems from performance issues
It allows performance testing of existing legacy applications with new technologies
It enables testers to test mobile applications
It gives users shorter test cycles to expedite application delivery
In summary, both Apache JMeter and Micro Focus LoadRunner are very good performance testing tools. No matter which one you choose, you will not be disappointed.
Performance and Fault-tolerance Architect with 1,001-5,000 employees
Real User
Top 5
May 11, 2022
@David Collier Performance testing is basically emulating your hardware and simulating your software needs, considering various parameters such as user loads (both concurrent and simultaneous), times, geo-locations, internal as well as external loads, protocols, policies, measurements at all 7 OSI layers, the level of proactive monitoring and alerting against various thresholds, SLAs/OLAs, scalability, capacity, and the budget available.
@Ravi Suvvari I think this is a perfect example of understanding "what we mean by performance testing" - I accept in principle what you are saying. But I cannot agree that "performance testing" is "emulating your hardware" and "simulating your software" as the assumption is that you are approximating Performance Testing to some arbitrary preconceived nirvana of what you hope performance will be.
What about all the other types of testing that can be thought of as subsets of performance testing such as isolation testing, failover testing and many others?
Sorry Ravi, I can't totally agree with your definition - it's one definition, but not THE definition.
As I said, it's about understanding what performance testing means in the context of the questioner, and it means many different things to different people.
Dave
I'm afraid the question is rather too generic, but I'll try and provide some pointers.
First, we need to understand what you mean by "performance testing". To a network manager, they are interested in how quickly, efficiently and securely a packet of data travels from point A to point B on a network. To an application manager, they are interested in how quickly, efficiently and securely a person can complete a unit of work using a given application. To a DBA, it's how quickly, efficiently and securely data can be accessed and stored. To a server manager, they're interested in how quickly, efficiently and securely data can be processed.
Possibly more relevant in this instance is the needs of the application developer / application tester. To a developer, they are interested in understanding how their application will perform under various loads. An application is likely to perform differently under different loads. For instance, a web application with 2 users is going to behave differently than if it has 2 million users. In fact, I'd go as far as to say that understanding your potential user base should influence server, network, database design in the first place.
Effectively, they all want to optimise speed, efficiency and security - and also COST in order to deliver the best outcome for the business they are supporting.
From this common set of requirements their needs diverge rapidly into a massive variety of metrics that need to be collected.
As you can see it's a VAST area for discussion. With such a variety of requirements it's probably impossible to name a single product that can meet ALL requirements of ALL interested parties.
This very site has a good report https://www.peerspot.com/landi... that outlines application performance testing solutions. These range from free, open source solutions all the way through to enterprise class solutions.
As an ex-Micro Focus employee, I can only comment on LoadRunner. It's probably one of, if not the, leading enterprise solutions out there. It supports a variety of application protocols and can simulate very many simultaneous users. I also understand that some of the simulation scripts in LoadRunner can be used by other Micro Focus tools to monitor application performance when live.
However, my main point remains. You need to understand performance at all levels of service delivery not just the application code. This may mean instrumenting code, network probes, database analytics, server performance monitoring and so forth to get a complete picture.
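As a small illustration of the "instrumenting code" layer Dave mentions (all names here are hypothetical), the sketch below records per-call latency with a decorator so hotspots can be spotted without a full APM stack. This covers only the application-code layer; network probes, database analytics, and server monitoring each need their own tooling, which is exactly why the overall cost climbs.

```python
# Minimal code-instrumentation sketch (illustrative; function names are
# made up). A decorator records the wall-clock duration of every call so
# per-function latency can be summarized later.
import functools
import statistics
import time
from collections import defaultdict

call_times = defaultdict(list)  # function name -> list of latencies (s)

def instrument(fn):
    """Wrap fn so every call's duration is recorded in call_times."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            call_times[fn.__name__].append(time.perf_counter() - start)
    return wrapper

@instrument
def slow_unit_of_work(n):
    time.sleep(0.01)  # stand-in for real work (I/O, queries, etc.)
    return sum(range(n))

for _ in range(5):
    slow_unit_of_work(1000)

mean_ms = statistics.mean(call_times["slow_unit_of_work"]) * 1000
print(f"slow_unit_of_work: {len(call_times['slow_unit_of_work'])} calls, "
      f"mean {mean_ms:.1f} ms")
```

Multiply this by every function, every network hop, and every database query, and the data-management overhead Dave describes next becomes obvious.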
This is where cost comes to bear.
Instrumenting every layer of service delivery is a complex process resulting in massive amounts of data collection and processing. The overhead of managing this data collection can be prohibitive. And this is where I learned a lesson from an old boss who told me.....
"We can instrument the application, we can put network probes everywhere, we can set up monitoring tools. It'll take us a year and cost us hundreds of thousands of pounds to find out where the problem really is. We've got an application we've written with 120 users that runs like a sloth on sleeping tablets. I've ordered a new server with faster CPUs, SSD disks, and more, faster memory. It's arriving in 2 weeks and costing me £15k."
I replied that he's simply sweeping the problem under the carpet and not fixing the root cause of the issue.
His reply to me was simply "I know. I'm simply making an inefficient system be inefficient faster. And by doing that, I'm saving £200k and improving things inside a month."
My point here: his priority was to get 120 people doing their given units of work faster and as quickly as possible, which was different from the "techie" need to define efficiency in terms of maximum throughput for the least usage of resources.
Performance testing during development is very important; performance monitoring in live operation is critical. The solutions to problems, though, don't always need to be more testing.
What are performance testing tools? Before an application can be deployed, it should ideally be tested under different operating conditions to make sure it can perform as expected. To do this, software testing professionals use performance testing tools (sometimes just called “testing tools”) to isolate and identify potential client, network, and server bottlenecks that might affect how an application will behave in production.
Some performance test products are commercial....
@Janet Staver Agree Janet
@David Collier Yes, it's a subset of the definition, not the full one, as the list goes on.
Failover, fault tolerance, high availability, resilience, chaos testing, etc. The list continues...
Hi @KashifJamil, @Ravi Suvvari , @David Collier and @reviewer1322280,
Can you please share your professional opinion?
Thanks.
@Evgeny Belenky Done, Thanx for giving this valuable opportunity