The user interface has a somewhat dated design, which is certainly an area that could be improved. Some of the modules feel loosely connected, but despite these aspects, our overall experience with the tool was positive. When you begin integrating your testing tools with qTest, the available examples are not very clear, and I believe this is an area that could be enhanced, particularly by providing clearer integration guidance. While the tool's integration with various testing tools is impressive, there is room for improvement in showcasing more use cases and benefits, especially through additional videos and documentation.
Our company still hasn't found a way to mark results in Tricentis qTest, using APIs, when our automation scripts run; if that capability were introduced in the future, it would probably improve the tool. This works in other tools, like Azure DevOps, Zephyr, or Xray, but not in Tricentis qTest. In the future, the ability to trigger an automated test suite and have the execution of the automated tests marked in Tricentis qTest would be a great addition to the tool. Tricentis qTest's technical support team also needs to improve its ability to respond to queries from users.
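For context, the kind of API call being described here looks roughly like the sketch below. It is a minimal sketch, assuming qTest Manager's REST API v3 and its test-log submission endpoint; the host, token, project/run IDs, and status ID are all placeholders, and the exact route and payload should be verified against your qTest version's API documentation.

```python
# Minimal sketch: mark an automated run's result in qTest Manager.
# Assumes the REST API v3 "submit a test log" endpoint; the host, token,
# IDs, and status ID below are placeholders -- verify the route and
# payload against your qTest instance's API documentation.
import requests
from datetime import datetime, timezone

QTEST_HOST = "https://yourcompany.qtestnet.com"  # placeholder
API_TOKEN = "your-api-token"                     # placeholder
PROJECT_ID = 123                                 # placeholder
TEST_RUN_ID = 456                                # placeholder

now = datetime.now(timezone.utc).isoformat()
payload = {
    "status": {"id": 601, "name": "Passed"},  # status IDs vary per site
    "exe_start_date": now,
    "exe_end_date": now,
    "note": "Reported by the automation pipeline",
}

resp = requests.post(
    f"{QTEST_HOST}/api/v3/projects/{PROJECT_ID}/test-runs/{TEST_RUN_ID}/test-logs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Logged result:", resp.json().get("id"))
```

A CI job that makes a call like this after each automated run would give qTest the same automatic result-marking the reviewer sees in Azure DevOps, Zephyr, and Xray.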
The support for Tricentis qTest has room for improvement; the response times could be better. There's a feature I want to document on the Tricentis Idea Portal for Tricentis qTest, which I hope to see in the next version of the tool. It's a feature available in Micro Focus where you execute a test and mark each step as pass or fail. Micro Focus then automatically marks the overall test as passed if all steps passed, or as failed if even one step failed. However, in Tricentis qTest, you still need to mark the test case at the overall level yourself; it's not automated, unlike what you have in Micro Focus. If Tricentis adds that feature to Tricentis qTest, it will make life easier for testers.
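The roll-up rule being requested is simple to state: the test passes only if every step passed, and fails as soon as any step fails. A minimal, tool-independent sketch of that logic:

```python
# Roll step-level results up into an overall test result, as described
# above: Passed only if every step passed; Failed if any step failed.
def overall_status(step_results: list[str]) -> str:
    if any(s == "Failed" for s in step_results):
        return "Failed"
    if all(s == "Passed" for s in step_results):
        return "Passed"
    return "Incomplete"  # some steps are still unexecuted

print(overall_status(["Passed", "Passed", "Passed"]))      # Passed
print(overall_status(["Passed", "Failed", "Passed"]))      # Failed
print(overall_status(["Passed", "Unexecuted", "Passed"]))  # Incomplete
```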
Sr. Product Manager - Intelligent Automation & RPA at a financial services firm with 5,001-10,000 employees
Real User
Nov 2, 2021
I'd like to see better integration in the platform so that there is a testing automation continuum, where customers can easily mature through qTest and Tosca functionalities.
Senior Architect at a manufacturing company with 1,001-5,000 employees
Real User
Nov 28, 2019
The information that qTest provides to executives could be better. If there are tests that have a lot of steps in them, people will go through and do seven out of eight steps, but it doesn't show the test as complete. So from a metrics perspective, what executives normally see makes it look like nothing was done, even though seven of the eight steps were done. In addition, you can add what I believe are called suites and modules. I opened a ticket asking what the difference is, and it seems there's very little difference. In some places, the documentation says there's no difference; you just use them to organize things how you want. But they're not quite the same, because there are some options you can use under one and not the other. That gets confusing. And since they are very close to the same, people use them differently, and that creates a lack of consistency. My preference would be that qTest establish the way to do it and everybody has to do it that way, so everything is done the same way. In response to my ticket, they said that they are the same and that you can choose whichever one best organizes things the way you want. But the problem is that everybody in the organization makes a different choice. They sent me a link to the documentation, and some of the documentation does say that there are some differences. There was one thing, like importing tests, that we could do under one but not under the other. That really made it a mess. That's the only really big concern I have had.
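To make the metrics gap concrete: if progress is reported only at the test level, a test with seven of eight steps executed counts as zero percent done. A small, tool-independent sketch that reports step-level completion alongside the test-level number:

```python
# Contrast test-level completion (what the executives see) with
# step-level completion (the work actually performed).
tests = {
    "TC-1": {"steps_total": 8, "steps_done": 7},
    "TC-2": {"steps_total": 5, "steps_done": 5},
}

tests_complete = sum(t["steps_done"] == t["steps_total"] for t in tests.values())
steps_done = sum(t["steps_done"] for t in tests.values())
steps_total = sum(t["steps_total"] for t in tests.values())

print(f"Test-level completion: {tests_complete / len(tests):.0%}")  # 50%
print(f"Step-level completion: {steps_done / steps_total:.0%}")     # 92%
```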
As an admin, I'm unable to delete users; I'm only able to make a user inactive. This is a scenario about which I've already made a suggestion to qTest. When people leave the company, I should be able to delete them from qTest; I shouldn't have to keep so many users. There are more improvements that could be made, such as giving users an easier way to access the tool.
The installation of the software could be streamlined. We pay for the on-premise support and they help us a lot, but the installation is very command-line oriented. I don't know if there's a way to make the installation process less command-line oriented, but I would like to see that happen.
We feel the integration between JIRA and qTest could be even better; it's not as user-friendly as qTest's other features. The JIRA integration with qTest needs to mature a lot. We have some concerns and some challenges as we try to work with those features. This is an area where, if we see more improvements, we will be very happy. We need smarter execution with JIRA in the case of failures, so that pulling out the issues again for the next round is easy. Currently, we have some challenges and complexities around that. Locating the JIRA defects that correspond to a failure in the test results is something of a challenge, and it impacts productivity, because the team spends more time mapping them again for new execution failures. If that is taken care of, it will save a lot of QA effort. I'm not sure if someone is working on that. We raised this point during our evaluation, so it was probably discussed at some point that they would get to it, but we don't have a clear version in which it will be taken up. Also, Insights is not that easy to use for someone who has just started working with qTest. You need to know what all the fields are and have some background on Insights; it's not that user-friendly for someone who's just starting to work with it. People should be trained so they know what all the various features inside it are. Then people will be able to appreciate it.
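One interim way to ease the defect-mapping pain described above is to query JIRA directly for the open defects tied to a test cycle. A minimal sketch, assuming JIRA's standard REST search endpoint; the host, credentials, and the "cycle-12" label convention are placeholders, not anything qTest defines:

```python
# Pull the open JIRA defects for a given test cycle so failed runs can
# be re-mapped quickly. The label convention ("cycle-12") is illustrative.
import requests

JIRA_HOST = "https://yourcompany.atlassian.net"  # placeholder
AUTH = ("user@example.com", "api-token")         # placeholder

resp = requests.get(
    f"{JIRA_HOST}/rest/api/2/search",
    auth=AUTH,
    params={
        "jql": "issuetype = Bug AND statusCategory != Done AND labels = cycle-12",
        "fields": "summary,status",
    },
    timeout=30,
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
```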
Manager, IT Quality Assurance (EDM/ITSRC/Infrastructure) at a financial services firm with 1,001-5,000 employees
Real User
Oct 30, 2019
We are starting to use qTest Insights a little bit. Right now, on a scale of one to five, I would say the Insights reporting engine is a three, because we are facing some performance issues. For example, qTest offers a baseline feature where you can only base the sort order for a specific story or requirement on two fields. However, our company has so many criteria and so many verticals that this baseline feature is not sufficient; we would want another field to be available in the sort order. When tickets come over from JIRA, it would be helpful to be able to sort by sprint, to begin with. And within the sprint there are labels or subcategories. Currently, it only allows us to sort on sprint and then subcategory. We would like to see things bucketed or placed in a folder with a status within the subcategory. We need three fields instead of two. When we raised this item, Tricentis said that it's a feature request. Also, features that are customizable or specific to a team are still not available in Insights reporting. We have submitted approximately 15 or 20 tickets to Tricentis so far to address those features, enhancements, and bugs. That's all in the works. Another important issue involves exporting from JIRA. When we export from JIRA itself, any attachments will be part of the export. But although qTest plugs into the test cases, it doesn't let us export attachments from JIRA. Everybody in the company who operates in JIRA would like to see the attachments that come out of the integration links, meaning the test cases. That is actually a feature that has to be set by Tricentis and not JIRA. Again, that required a new feature request to be submitted.
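The three-field ordering requested here is easy to express outside the tool while waiting for the feature. A minimal sketch over an exported ticket list; the field names are illustrative, not qTest's actual export schema:

```python
# Sort exported tickets by sprint, then label/subcategory, then status:
# the three-level ordering the reviewer wants Insights to support.
# Field names here are illustrative, not qTest's export schema.
tickets = [
    {"sprint": "Sprint 12", "label": "payments", "status": "Open"},
    {"sprint": "Sprint 11", "label": "payments", "status": "Closed"},
    {"sprint": "Sprint 12", "label": "accounts", "status": "Closed"},
]

for t in sorted(tickets, key=lambda t: (t["sprint"], t["label"], t["status"])):
    print(t["sprint"], t["label"], t["status"])
```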
Assistant Vice President, IT Quality Assurance at Guardian Life Insurance
Real User
Oct 30, 2019
They're coming out with a new feature now, an analytics module. I, personally, lean more toward the metrics/analytics side of things, so anything they can do to come up with a reliable template where I can look at all of my metrics at a project, quarter, or enterprise level would be fantastic. That's my "nirvana" goal; I don't want to have to go to Tableau. I have a lot of hope for their analytics module. And I would really love to find a way to get the results of Jenkins executing my Selenium scripts into qTest Manager, so that when I look at everything I can see the whole rather than the parts. Right now, I can only see what happens manually. Automation-wise, we track it in bulk, as opposed to the discrete test cases that are performed. So that connection point would be really interesting for me. We have between 150 and 200 users who are all QA. Project managers might cycle in sometimes for metrics, but we publish our metrics. You can embed scripts that come out of Insights, which is a really great feature. It's a feature I would really like to see them work on more, to make sure their APIs are bi-directional and timely. It's a little unclear whether they refresh at a certain point in time or when I click. That is one area that is a little murky.
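The Jenkins-to-qTest connection the reviewer wants usually starts from the JUnit XML report a Selenium job leaves behind. A minimal sketch of that collection step (standard library only; the report path is a placeholder), after which each result could be pushed to qTest with a call like the test-log sketch earlier:

```python
# Collect per-test results from a Jenkins/Selenium JUnit XML report.
# The report path is a placeholder; pair this with an API call like
# the earlier test-log sketch to push each result into qTest Manager.
import xml.etree.ElementTree as ET

root = ET.parse("target/surefire-reports/TEST-results.xml").getroot()
for case in root.iter("testcase"):
    name = f'{case.get("classname")}.{case.get("name")}'
    failed = case.find("failure") is not None or case.find("error") is not None
    print(name, "Failed" if failed else "Passed")
```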
The Insights reporting engine is a little challenging to use in terms of setting up a report and getting the data; it took me a while to understand how to use the tool. I'm mainly extracting the data out of the tool; I'm not necessarily using any of the dashboards in it. There are some fields that I did not make site-specific because I had to get things up and running quickly. The fields are in both the Test Run area and Defects. If you set fields up as project-specific rather than site-specific, you can't get any of those fields out of Insights. That's a limitation they need to figure out; they shouldn't have that limitation in the tool. In addition, I really can't stand the Defects module. It's not easy to use. Micro Focus ALM used to be called QC, and that solution's Defects module is really robust. For example, let's say you have a defect and you have a query. You can actually walk through each defect by just clicking an arrow: you go through that defect, add your updates, click the "next" arrow, and walk down through them. With the qTest Defects module, you can't do that. You have to run a query; you're pretty much just querying a database. It's not really a module, or at least not a robust one. Everything is very manual. By contrast, qTest's test design and test execution modules are very robust. They just missed the boat on the Defects module. From what I've heard and from what I can understand, other people are using JIRA or something else to do their defect tracking, and we're not. I needed a tool to do everything. That's their weakest link.
We have used the Insights reporting engine but, within the last six months or so, since Tricentis took it over, they've started to improve it. We had some custom fields to match our process dates, and to track who the project manager of the release is and who the test coordinator is. That way, we can keep track of what kind of testing is being done for that particular project. The Insights engine would not show us any of the custom fields when we first started using it; I've been working with them to improve that in Insights. The next phase is that, by the end of the year, they're supposed to release a new analytical tool within Insights or change Insights into that analytics tool. I'm looking forward to that, because I do all my analytics with exports from qTest and exports from our ITSM/ITIL system, Cherwell. I then make my reports out of them, so it will be very welcome to have that functionality. I do some reporting for executives and business users from qTest: I go to Insights, run a query on the fields I want them to see, and then export that into Excel. I get the graphs, do a screen print, put it into a report, and send it off in a PowerPoint presentation. The quality of that data needs help. I use it fairly regularly for defect reporting because it shows an excellent view of the defects that are associated with a project and whether they're open or closed. I'm looking forward to the new Analysis tool that is coming to Cloud customers soon. Reporting shouldn't be so difficult; I shouldn't have to write so many queries to get the data I'm looking for, for a set of metrics about how many releases we had. I still have to break those spreadsheets out of there to get the data I need. Also, qTest doesn't have a workflow engine, apart from the one for defects. I'd like to see more of that kind of thing. It might help improve efficiency as we move into the future, especially when automation comes in.
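As an illustration of the export-and-report workflow described above, a short pandas sketch that turns a defect export into the open-versus-closed summary currently assembled by hand; the file path and column names are illustrative, not qTest's actual export schema:

```python
# Summarize an exported defect list by project and status: the kind of
# open/closed report the reviewer assembles manually in Excel.
# The CSV path and column names are illustrative placeholders.
import pandas as pd

defects = pd.read_csv("qtest_defects_export.csv")  # placeholder path
summary = defects.pivot_table(
    index="project", columns="status", values="defect_id",
    aggfunc="count", fill_value=0,
)
print(summary)
summary.to_excel("defect_summary.xlsx")  # needs openpyxl; drop into the deck
```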
Sr. Manager Quality Assurance at Forcepoint LLC (Formerly Raytheon|Websense)
Real User
Oct 24, 2019
I wouldn't say a lot of good things about Insights, but that's primarily because, with so many test cases, it is incredibly slow for us. We generally don't use it because of that. It would be nice to use; it has good features, but as soon as we started using qTest, Insights became unusable. I do know that they're planning on replacing it next month. It's the one bad side of the application and they're replacing it, so at least they're listening to their customers. They know when they've got a problem, and that's a good thing. In addition, within Insights, the report creation could be more versatile and intuitive. Generally, the reporting tools could be made more streamlined and easier to access for people outside of the organization. If I have one complaint about qTest, it's its reporting. Again, that is something that's being replaced soon, so it'll be an invalid point within a month. It has already been fixed in the on-premises version; the hosted version has yet to get the replacement. I don't know what the replacement is going to be like; I haven't used it, so I can't really judge it.
Senior Director of Quality Engineering at Cheetah Digital
Real User
Oct 24, 2019
The Insights reporting engine has a good test-metrics tracking dashboard. The overall intent is good compared to other test tracking or test management tools, but the execution is a little limited. The overall solution is good, but the results are not consistent. The basic premise and functionality work fine; when you try to extend the use of it a little more, it struggles. It is a little clunky with some of the advanced metrics, and some of the colorings are a little unique. They are currently working on a new flavor of Insights. We do have dashboards and links set up that our executive level accesses. Overall, the numbers are accurate, based on what we're putting into it, but where we lose integrity, or where we lose the overall perception of things, is when the colors start changing or when red is used to mean good. That's when executives lose respect for it. We've used it as a dashboard during key deployments, and then, as progress is being made and the reports are being updated, colors start to change, and that distracts from the overall intent of reporting progress. We chose to leverage Insights so that we didn't have to manually create charts in either a Google Sheet or Excel, since we don't have the resources, time, or bandwidth to do that. That is what excited us about Insights. But then, it just didn't meet our expectations. We have voiced our concerns to Tricentis and they definitely have empathy; we talk about it and they keep us updated. Through an acquisition, they're going to leverage a new analytics tool, and we are excited about that, once it launches. We have also discussed with our account manager a couple of possible enhancements here and there, but nothing that's critical or major. One example is that when you're trying to link test cases to requirements, a lot of the time there is duplication between the two. Sometimes you want to tie the same test cases to the same requirements. An enhancement would be a quick way to copy that over directly, without having to manually link every single one again. We have some instances where a large chunk of test cases are tied, re-used, and similar. When you get upwards of 15 or 20, to limit some of the tediousness of doing them all manually, it would be helpful if you could take a copy of the links from one and switch them over to another. It's not a major concern; it would just be nice as a quick way to do it. Another example is with the charts, and again, the intention is great: you can put in a date range and apply it, then you get to another screen and come back, and after updating several charts the date range is gone again. You have to go back in, and it sometimes takes two to three tries before that date range is saved.
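While waiting for a native bulk-copy feature, the link duplication described here could in principle be scripted against the qTest API. The sketch below is deliberately schematic: both helpers are hypothetical stand-ins for qTest's link endpoints, which would need to be confirmed in the API documentation before wiring them up.

```python
# Schematic sketch of the bulk link-copy enhancement described above.
# Both helpers are HYPOTHETICAL placeholders for the real qTest link
# endpoints -- confirm routes and payloads in your API docs before use.
def get_linked_requirements(test_case_id: int) -> list[int]:
    raise NotImplementedError("wire to the qTest link-query endpoint")

def link_requirement(test_case_id: int, requirement_id: int) -> None:
    raise NotImplementedError("wire to the qTest link-create endpoint")

def copy_requirement_links(source_tc: int, target_tc: int) -> None:
    # Re-create every requirement link from the source test case on the
    # target, instead of re-linking each requirement by hand in the UI.
    for req_id in get_linked_requirements(source_tc):
        link_requirement(target_tc, req_id)

# e.g. copy_requirement_links(source_tc=1001, target_tc=1002)
```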