Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in the data you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The more controls you can put in place and the cleaner you keep your data, the better off you are. There are laws coming into effect in the next year or so that are going to really scrutinize production data being in the lower environments.
AVP Quality Assurance at GM Financial
Video Review
Gives you confidence in the data you're creating and keeps you out of the SOX arena, because there's no production data within that environment.
How has it helped my organization?
We have certain aspects of our data that we have to self-generate. The VIN is one: we have to be able to generate it on the fly. TDM allows us to generate that VIN based upon whether it's a truck, a car, etc. We're in the auto loan business.
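As a concrete illustration, here is a minimal standalone sketch in Python of the kind of on-the-fly VIN generation described above; it is not TDM's rule engine. The check digit follows the public North American (ISO 3779) scheme, while the WMI prefixes per vehicle type are hypothetical placeholders.

```python
import random

# Transliteration table and weights from the North American VIN
# check-digit scheme; the letters I, O and Q never appear in a VIN.
TRANSLIT = {c: v for c, v in zip("ABCDEFGH", range(1, 9))}
TRANSLIT.update({c: v for c, v in zip("JKLMN", range(1, 6))})
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update({c: v for c, v in zip("STUVWXYZ", range(2, 10))})
TRANSLIT.update({str(d): d for d in range(10)})
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def check_digit(vin17: str) -> str:
    """Compute the position-9 check digit for a 17-character VIN."""
    total = sum(TRANSLIT[c] * w for c, w in zip(vin17, WEIGHTS))
    remainder = total % 11
    return "X" if remainder == 10 else str(remainder)

def generate_vin(vehicle_type: str) -> str:
    # Hypothetical WMI prefixes keyed by vehicle type; a real rule set
    # would use the manufacturer codes your application expects.
    wmi = {"car": "1G1", "truck": "1GT"}[vehicle_type]
    alphabet = "ABCDEFGHJKLMNPRSTUVWXYZ0123456789"
    vin = wmi + "".join(random.choice(alphabet) for _ in range(14))
    # Position 9 (index 8) carries the check digit; its weight is 0,
    # so it can be patched in after the sum is computed.
    return vin[:8] + check_digit(vin) + vin[9:]

print(generate_vin("truck"))
```

A TDM generation rule would express the same constraints declaratively; the point is that a structurally valid VIN never has to come from production.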
What needs improvement?
I would like to see improvement in the ease of rule use. Sometimes it gets a little cumbersome setting up some of the rules. I'd like to be able to nest a rule inside of a rule inside of a rule; kind of an iterative process.
What do I think about the stability of the solution?
TDM has been around for a couple of years; I used it at my previous company as well. It's been really stable. It's a tool that probably doesn't get fully utilized. We intend to take it, partner it with the SV solution, and generate the data for the service virtualization aspect.
What do I think about the scalability of the solution?
Scalability is similar to what I said along the SV lines: it's relatively easy to scale. It's a matter of how you want to set up your data distribution.
How are customer service and support?
We were very pleased with the technical support.
Which solution did I use previously and why did I switch?
When you have to generate the amount of loan volume that we need – 50 states, various tax laws, etc. – I needed a solution with which I could produce quality data that fits the targeted testing we need, any extra test cases, etc. We're more concentrated on being very succinct in the delivery and in the time frame we need to get the testing done.
I used CA at my previous company, so I have a prior working relationship with them.
How was the initial setup?
The initial setup was done internally. We were able to follow the instructions that were online when we downloaded it and get the installation done. We did have a couple of calls into the technical support area, and they were able to resolve things fairly quickly.
What other advice do I have?
Generating synthetic data can often be cumbersome. With TDM, you can generate it with your rules in place, so you know your data is going to be very consistent. When we want a particular loan to come through with a particular credit score, we can select and generate the data out of TDM, which creates a data file for my front-end script, through using DevTest.
I also push the service virtualization recording to respond to the loan's request hitting the credit bureau, returning a certain credit score, which then gets us within the target zone for the loan we're looking for, to trigger a rule.
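A rough sketch of how those two pieces fit together, in illustrative Python rather than TDM or DevTest output; the field names and the credit-score band are hypothetical. The generated rows carry the score the virtual bureau stub should return, so every loan lands in the band that triggers the rule under test.

```python
import csv
import random

def generate_loans(n: int, score_band=(640, 679)):
    """Generate loan rows whose stubbed bureau score falls in the target band."""
    return [{
        "loan_id": f"LN{i:06d}",
        "amount": random.randrange(5_000, 60_000, 500),
        "state": random.choice(["TX", "CA", "NY", "OH"]),
        "stub_credit_score": random.randint(*score_band),
    } for i in range(n)]

# Data file consumed by the driving test script; the virtual service
# would be configured to echo stub_credit_score back as the bureau's
# response for the matching loan_id.
with open("loan_test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["loan_id", "amount", "state", "stub_credit_score"])
    writer.writeheader()
    writer.writerows(generate_loans(100))
```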
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Practice Manager (Testing Services) at a financial services firm with 1,001-5,000 employees
Video Review
Includes basic services which allow you to mask data and create synthetic data. It also includes test matching which accelerates test cycles and allows automation to happen.
What is most valuable?
You've got the basic services of the TDM tool, which allow you to mask data and create synthetic data. But I think what really sets TDM apart from the other competitors is the added extras you get when doing true test data management: things like the cubing concepts that Grid-Tools' Datamaker really brings to bear within test data management teams. You've also got test matching, which massively accelerates test cycles, really gives stability, and allows automation to happen.
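To make the test-matching idea concrete, here is a toy Python sketch of the concept: instead of keying data by hand, each test's data criteria are matched against rows that already exist. The field names and criteria are invented for illustration, not taken from the tool.

```python
# A small pool of existing test data rows (illustrative fields only).
data_pool = [
    {"account_id": 1, "status": "active",  "balance": 120.0},
    {"account_id": 2, "status": "dormant", "balance": 0.0},
    {"account_id": 3, "status": "active",  "balance": 0.0},
]

def find_match(criteria: dict):
    """Return the first row satisfying every criterion, or None."""
    return next(
        (row for row in data_pool
         if all(row[key] == value for key, value in criteria.items())),
        None,
    )

# A test that needs an active account with a zero balance:
row = find_match({"status": "active", "balance": 0.0})
print(row)  # {'account_id': 3, 'status': 'active', 'balance': 0.0}
```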
How has it helped my organization?
We've got a centralized COE for test data management within our organization, and the benefits are really threefold: cost, quality, and time to market. In terms of quality: data is the glue that holds systems together, and therefore, if I understand my test data, I understand what I'm testing. Through the tooling, and the maturity in the tooling, we're really bringing an added quality aspect to what we test, how we test, and the risk-based testing approach we might take.
In terms of speed to market, because we don't manually produce data anymore – we use intelligent profiling techniques and test data matching – we massively reduce the time we spend finding data, and we can also produce data on the fly, which turns test data cycles around. In terms of cost, because we're doing it a lot quicker, it's a lot cheaper.
We have a centralized test data management team that caters for all development within my organization. We've created an organization that is much more effective and optimized in terms of the time it takes to identify data and get into test execution in the right way.
What needs improvement?
The big area of exploitation for us is a feature that already exists within the tool: the TCO element is something massive. I talked earlier about the maturity and structure that it gives to testing. I think this is a game changer in terms of articulating the impact of change; no project goes swimmingly the first time, and therefore the ability to assess the impact on a test through simple process changes is a massive benefit.
What do I think about the stability of the solution?
The stability of the solution is really fine. The really big question is the stability of the underlying system that it's trying to manipulate; the tool is the tool, and it does what it needs to do.
What do I think about the scalability of the solution?
Within our organization we have many, many platforms and many, many different technologies. One of the interesting challenges we always have, especially when we're doing performance testing, is whether we can get the volumes of data in sufficient time. We use things like data explosion quite often, and it does what it needs to do, very quickly.
How are customer service and technical support?
We work in an organization where we use many tools from many different suppliers. The relationship my organization has with CA is a much richer one; it's not just tool support.
Which solution did I use previously and why did I switch?
Originally, we used to spend hours and hours of spreadsheet time manually creating and keying data: massively inefficient and massively error-prone. Clearly, as part of a financial institution, we need to conform to regulations. Therefore, we needed an enterprise solution to make sure we could deliver regulation-compliant test data to suit our projects.
The initial driver for buying any tooling is the problem statement: what's the driver to get these things in? Once you realize there is so much more than just the regulatory bit – as I say, the time, cost, and quality benefits it can bring to testing – that's really the bigger benefit than just the regulatory one.
How was the initial setup?
We've had the tool for about four or five years now within the organization. As you might expect, we first got the guys in not knowing anything about the tool and not really knowing how to deploy it, so we called on the CA guys to come in and show us how the tool works, and also how to apply it within our organization. We had a problem case that we wanted to address, we used that as the proving item, and that's really where we started our journey toward a dedicated test data management function.
Which other solutions did I evaluate?
Important evaluation criteria: to be honest, it's got to be around what the tool does. A lot of the tools on the market do the same thing; the questions are what differentiates them, and what problem statement the organization is really trying to fulfill. Once you've got the tool, that's great, but you need the people and process; without those – and it comes back to the relationship you have with the CA guys – you've just got shelfware and a tool. We went through a proper RFP selection process where we set our criteria, invited a few of the vendors in to demonstrate what they could do for us, and picked the one that was best suited to us.
What other advice do I have?
Rating: no one's perfect, but you've got to go in the top quartile, so probably eight upwards. In terms of test data management solutions, I think it's the best out there. The way the tool is going – moving into other areas like TCO and the integration with SV – is a massive thing for us.
My recommendation is that this is absolutely best in breed. But as well as buying the tool, it would be a mistake not to also invest in understanding how the tool integrates into the organization, and how to bring it to the tools team, the testing teams, and the environment teams you need to work with.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Solutions Architect at American Express
Allows me to generate and manage synthetic data, but the interface could be better
Pros and Cons
- "It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results."
What is our primary use case?
Generate synthetic test data.
It has performed fine. It provides us the capabilities that we were anticipating.
How has it helped my organization?
It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results. We can automate that process. Plus, we're no longer using production data.
What is most valuable?
- I am able to maintain metadata information based on the structures, and
- I am able to generate and manage synthetic data from those.
What needs improvement?
The interface, based on our unique test case – because we are an extremely unique platform – could be better. We have to do multiple steps just to create a single output. We understand that, because we are a niche architecture, it's not high on their list, but eventually we're hoping it becomes integrated and seamless.
As noted in my answer on "initial setup", I would like to see that I don't have to do three steps; rather, that it's all integrated into one. Plus, I'd like to know more about their API, because I want to be able to actually call it directly, passing in specific information so that I can tune the results to my specific needs for that test case, and do it for multiple messages in one call.
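For illustration only, this is the shape of the single-call workflow being described. The endpoint, payload, and parameter names below are hypothetical sketch material; they do not reflect TDM's actual API.

```python
import requests

# Hypothetical one-shot generation call: ask for N messages with
# specific overrides and get the generated records back directly.
resp = requests.post(
    "https://tdm.example.internal/api/generate",   # placeholder endpoint
    json={
        "model": "platform_message",               # which data model to use
        "count": 25,                               # multiple messages, one call
        "overrides": {"currency": "USD", "region": "US"},
    },
    timeout=30,
)
resp.raise_for_status()
records = resp.json()
```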
What do I think about the stability of the solution?
Stability is fine. It's stable; it's not like it crashes or anything like that, because it's just a utility that we use to generate data. Once we generate the data, we capture it and maintain it. We don't use the tool to continually generate data; we only generate it for the specific test case, and then don't generate it again. But it gives us the ability to handle all the various combinations of variables; that's the big part.
What do I think about the scalability of the solution?
For our platform, scalability isn't really an issue. We're not planning on using it the way it was intended, because we're not going to use it to continually generate more data. We want to generate only specific output that we will then maintain separately and reuse. So, the only time we will generate anything is when there is a different test case needed, a different condition that we need to be able to create. Scalability is not an issue.
How are customer service and technical support?
Tech support is great. We've had a couple of in-house training sessions, and it's coming along fine. We're at a point now where we're trying to leverage some other tools, like Agile Designer, to start managing the knowledge we're capturing, so that we can then begin automating the construction of this component with Agile Designer as well.
Which solution did I use previously and why did I switch?
We didn't have a previous solution.
How was the initial setup?
The truth is that I was involved in the setup, but they didn't listen to me. "They" are other people in the company I work for. It wasn't that CA did anything right or wrong; it was that the people who decided how to set it up didn't understand. So we're struggling with that, and we will probably transition over. Right now we have it installed on laptops, and it shouldn't be; it should be server-based. We should have a central point where we can maintain everything.
So, the setup is fairly straightforward, except for the fact that there are three steps we have to go through. We have to do a pre-process, then we can do the generation of our information, and then there's a post-process we have to perform, only because of the unique characteristics of our platform.
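Until those steps are integrated, a thin wrapper can at least make them feel like one. Below is a minimal sketch, assuming each step is available as a runnable script; the script names are placeholders, not real TDM entry points.

```python
import subprocess

# Chain the three platform-specific steps into a single invocation.
STEPS = [
    ["python", "pre_process.py"],    # shape the platform-specific inputs
    ["python", "generate_data.py"],  # the actual generation run
    ["python", "post_process.py"],   # convert output for our platform
]

for step in STEPS:
    subprocess.run(step, check=True)  # check=True stops the chain on failure
```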
Which other solutions did I evaluate?
In addition to CA Test Data Manager, we evaluated IBM InfoSphere Optim. Those were the two products that were available to our company at the time when I proposed the idea of using it in this way.
We chose CA because they had the capability of doing relationship mapping between data variables.
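As a toy illustration of what that relationship mapping buys you: if child rows are generated from the parent keys, referential integrity holds by construction. The table and column names here are invented for the example.

```python
import random

# Parent rows: generated customers with their own keys.
customers = [
    {"customer_id": i, "segment": random.choice(["retail", "corporate"])}
    for i in range(1, 101)
]

# Child rows: every transaction draws its foreign key from an existing
# customer, so no orphan records can be generated.
transactions = [
    {"txn_id": t,
     "customer_id": random.choice(customers)["customer_id"],
     "amount": round(random.uniform(5, 2500), 2)}
    for t in range(1, 1001)
]
```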
What other advice do I have?
The most important criterion when selecting a vendor is support. And obviously it comes down to: Do they offer the capabilities I'm interested in, at a reasonable price, with good support?
I rate it at seven out of 10 because of those three steps I have to go through. If they get rid of those, make it one step, and do these other things, I'd give it a solid nine. Nothing's perfect.
For my use, based on the products out there that I have researched, this is the best one.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
QA Director at Sogeti UK
We are able to create test data for specific business case scenarios; it's user-friendly
Pros and Cons
- "The most valuable feature is the Portal that comes with the tool. That helps make it look much more user-friendly for the users. Also its ease of use - even for developers it's not that complicated."
- "They should make the Portal a little more user-friendly, make it even easier to configure things directly from the Portal."
- "There were some issues with initial setup. It wasn't as smooth as we had thought. We ran into a network issue, a firewall issue, things like that. It wasn't something we could not fix. We worked with CA support and with the client's team to fix it. But there were issues, it took a lot of time to install and configure."
What is our primary use case?
We are using it to implement test data management.
It is a new implementation so there were some challenges. But so far, it has been good.
What is most valuable?
The most valuable feature is the Portal that comes with the tool. That helps make it look much more user-friendly for the users.
Also its ease of use - even for developers it's not that complicated.
It gives us the ability to
- mask the data
- sub-set the data
- synthetically generate test data
- create test data for specific business case scenarios
and more.
What needs improvement?
- Addition of more data sources.
- Make the Portal a little more user-friendly, make it even easier to configure things directly from the Portal.
For how long have I used the solution?
Less than one year.
What do I think about the stability of the solution?
It is stable, but it is not where even CA wants it to be. There have been numerous releases going on and there are still some we are waiting for. But, overall it's good.
What do I think about the scalability of the solution?
It is scalable. This particular tool is used by certain types of engineers, TDM engineers. But the recipient of the tool can be anybody so it can be scaled for as many licenses as the customer is willing to pay for. It's kind of expensive.
How are customer service and technical support?
Tech support has been very helpful.
They have been as responsive as they can be. I'm assuming that they're very busy, and they are. They usually respond within the same day. And usually the requests that go to the technical support side are not that simple either, so I can understand that.
Which solution did I use previously and why did I switch?
We are partners with CA, so this was one of the strategic directions my company also wanted to take. And CA had the near-perfect solution, which we thought we should invest in, together.
How was the initial setup?
It was good. There were some issues. It wasn't as smooth as we had thought.
We ran into a network issue, a firewall issue, things like that. It wasn't something we could not fix; we worked with CA support and with the client's team to fix it. But there were issues, and it took a lot of time to install and configure.
Which other solutions did I evaluate?
We are a consulting company, so when we go to a client we do an evaluation, and often we have to tell them about the different products we evaluated. In this case, CA TDM has competition: Informatica has a similar product called Informatica TDM, and IBM has a similar product called IBM InfoSphere Optim. These are the main competitors of CA.
What other advice do I have?
When selecting a vendor the important criteria are
- ease of use
- responsiveness of the technical support
- forward-looking products. By that I mean: do they have a plan for the next three months, six months, a year, rather than just making the product and then forgetting about it?
For this particular area, test data management – and I am involved in evaluating other companies' products as well – CA is so far the leader. I personally compare each feature across all the companies we evaluate, and so far CA is number one. There is still some improvement to be done, which CA is aware of, but I would advise a colleague that we can start with CA.
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
Domain Manager at KeyBank National Association
Video Review
Enables us to incorporate automation and self-service to eliminate all of our manual efforts
Pros and Cons
- "It removes manual intervention. A lot of the time that we've spent previously was always manually, as an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts."
- "Core features that we needed were synthetic data creation, and to be able to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met the most of our needs."
- "All financial institutions are based on mainframes, so they're never going to go away. There are ppportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product."
How has it helped my organization?
The benefit is that it removes manual intervention. A lot of the time we spent previously was manual: an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts.
What is most valuable?
Currently, the complex data mining that we do out there. Any financial institution runs into the same challenges we face: referential integrity across all databases, and finding that one unique piece of customer information that meets all the criteria we're looking for. All the other functions are fabulous as well, as far as subsetting and data creation.
What needs improvement?
I think the biggest one relates to mainframes: all financial institutions are based on mainframes, so they're never going to go away. There are opportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product. Certainly, it does what we need out there, but there are always opportunities to greatly improve it.
What do I think about the stability of the solution?
Stability for the past year and a half has been very good. We have not had an outage that has prevented us from doing anything. It has allowed us to connect to the critical databases that we need, so no challenges.
What do I think about the scalability of the solution?
We haven't run into any issues at this point. So far we think that we're going to be able to get where we need to. In the future, as we expand, we may have a need to increase the hardware associated with it and optimize some query language, but I think we'll be in good shape.
Which solution did I use previously and why did I switch?
We were not using a previous solution. It was all home-grown items: a lot of automated scripting and some performance scripting, in addition to manual efforts.
As we looked at the available solutions, some of the core features we needed were synthetic data creation, and the ability to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met most of our needs.
What other advice do I have?
I'd rate it about an eight. It provides the functionality that we're needing. There are always opportunities for improvement and I don't ever give anyone a 10, so it's good for our needs.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Front Line Manager at an energy/utilities company with 1,001-5,000 employees
We use most of the test matching features in our testing processes
Pros and Cons
- "I like the integration a lot. We use the test matching feature and the ability to make and find data."
- "When I run my App Test, I can directly connect with TDM. I can publish all my data into one table in TDM, then run my App Test directly."
- "When we publish a lot data into the target, sometimes it is not able to handle it."
- "The relationship between cables needs to be added."
- "A lot of research, data analysis, and work needs to be done on the source system in the tool which requires a data expert. This tool definitely requires a senior person to work on this issue, as it might be a challenge for a tester."
What is our primary use case?
We do a lot of data creation and use the test matching features. We also established the subsetting and cloning process, based on the client's requirements. Creating data and using the test matching features is what we have found fits our purpose. We use most of the test matching features in our testing processes, and the integration with App Test is something we use heavily.
What is most valuable?
We can always get different sets of data and obtain the most recent data. They have production refreshes, from which we can take data subsets to do our testing. We can also lock data down for a particular user's test, or generate data. E.g., if the data is not there, there are multiple forms and variables through which data can be generated.
When I run my App Test, I can directly connect with TDM. I can publish all my data into one table in TDM, then run my App Test directly. Essentially, I click one test and it runs with 100 sets of data.
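Outside of App Test, the same data-driven pattern looks roughly like this in generic pytest terms; the file name, column name, and the validate_loan stand-in are all hypothetical.

```python
import csv

import pytest

def load_rows(path="tdm_published_rows.csv"):
    """Rows previously published from TDM into a table or file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def validate_loan(row):
    # Placeholder for the real call into the system under test.
    return float(row["amount"]) > 0

# One logical test definition, executed once per published data row.
@pytest.mark.parametrize("row", load_rows())
def test_loan_processing(row):
    assert validate_loan(row)
```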
I like the integration a lot. We also use the test matching feature and the ability to make and find data.
What needs improvement?
The relationship between tables needs to be added.
A lot of research, data analysis, and work needs to be done on the source system in the tool, which requires a data expert. This tool definitely requires a senior person to work on it, as it might be a challenge for a tester.
For how long have I used the solution?
One to three years.
What do I think about the stability of the solution?
There have been major upgrades in the last couple of years. There was a stability issue when I started to work on it. After we upgraded to 4.0, a lot of those problems were solved, and there were advanced features as well. With 4.0, the product is now really stable and working fine.
The stability issue was that we used to log in and see errors. We had to work around them; for example, we would log in through the admin, run some queries, and log back in again. After the upgrade was done, everything now appears to be fine.
What do I think about the scalability of the solution?
When we publish a lot of data into the target, sometimes it is not able to handle it. What we normally do is model the data from the source and try to publish short sets of data into the target. Many times the publish will fail, and I think the reason is the huge sets of data we publish. I am not sure if it is a tool issue, but I have seen it many times: it is not able to publish a huge set of data (thousands and thousands of rows), while with a few sets it works. When we try to publish a lot of data, every time there is a publish error. Though I have not tried it lately, we have seen this before.
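The batching workaround described above can be sketched generically. In the snippet below, publish stands in for whatever publish call the tool or its API exposes, and the batch size is an arbitrary assumption.

```python
def publish_in_batches(rows, publish, batch_size=500):
    """Publish rows in small chunks so one failure doesn't sink the run."""
    failed = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        try:
            publish(batch)
        except Exception as exc:
            # Record the failed batch for retry instead of losing everything.
            failed.append((start, exc))
    return failed
```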
How are customer service and technical support?
Tech support for TDM is fine. I have logged maybe one or two issues for TDM. Most of our issues are for App Test.
Which solution did I use previously and why did I switch?
We did not use another solution before TDM.
How was the initial setup?
The setup was pretty straightforward.
What's my experience with pricing, setup cost, and licensing?
Know all your data requirements before buying this product. The tool has a lot of features, which may or may not be useful to a particular company, so be sure what your data requirements are.
What other advice do I have?
I would certainly recommend this product, because of the vast variety of data it provides for testing and its different features, like subsetting and cloning. I have heard of products which can do cloning and similar things, but there are many additional features in this tool which are very useful for testing and finding defects.
We found we mostly use one or two features, so you need to be very clear on what you need before choosing a product. Test Data Manager is good, and there are a lot of advantages you get from using the tool, especially for testing and for integrating with App Test, which is something we use a lot.
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.
Quality Assurance at a logistics company with 1,001-5,000 employees
With synthetic data generation, we can test applications with three or four times the production load. We would like to see it generate synthetic data for non-relational DBs.
What is most valuable?
One of the most valuable features to us is synthetic data generation. We generate a lot of synthetic data for our performance testing, bulking up our performance environment to see how much load it can sustain. We've been doing it for relational data structures.
At a recent conference, I was talking to the product management team. We have a big use case for synthetic data generation for non-relational data structures. They have it on their roadmap, but we would love to see it coming out very soon. With modernization, relational databases are going away and non-relational databases are coming up. That's a big use case for us, especially with our graph database: we have a big, huge graph database, and we need to generate a lot of synthetic data for it.
How has it helped my organization?
It has really changed the culture in the company, because nobody could ever imagine generating millions of records. Even production systems have just a couple of million records. When you want to test your applications with three or four times the production load, you can never actually achieve it any other way than synthetic data generation; you can't have that volume of data in your DBs. Even if you subset your entire production, you would get just 1x of it. To get 3x or 4x, you have to go to either data cloning or synthetic data generation.
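A minimal illustration of that 1x-versus-4x point: a subset can never exceed production volume, but cloning each row under a fresh key multiplies it. The column name and key range below are invented for the example.

```python
import itertools

def clone_to_multiple(rows, factor=4, key="account_id"):
    """Clone each row `factor` times, giving every copy a fresh key."""
    # Start new keys well outside the production key range so clones
    # never collide with the original rows.
    new_keys = itertools.count(start=1_000_000)
    cloned = []
    for _ in range(factor):
        for row in rows:
            copy = dict(row)
            copy[key] = next(new_keys)
            cloned.append(copy)
    return cloned

production_subset = [{"account_id": i, "balance": 100.0 * i} for i in range(1, 4)]
print(len(clone_to_multiple(production_subset)))  # 12 rows: 4x the subset
```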
What needs improvement?
The solution can really improve on non-relational data structures, because that's a big industry use case we are foreseeing. I'm talking about databases, and I'm also talking about request-response pairs: services data generation. We use it so much for virtualization. If we could create web services request-response pairs non-relationally, supporting GET, POST, and so on, that would be a big win for us.
For how long have I used the solution?
I've been using CA Test Data Manager since it was Datamaker, starting about 2.5 years ago, and I've been using it pretty regularly since then. It has undergone a big, big transformation, and there is a lot of good stuff coming up.
What do I think about the stability of the solution?
We still use the old thick-client version of it, but we have seen the demos as it moves to the web interface. I think it's going to be very stable going down the line.
What do I think about the scalability of the solution?
It is not very scalable, because generating even a couple of million records takes six to seven hours. If cloud muscle power could be included – if the synthetic data generation could be done on a cloud instance with many GB of RAM (it's all synthetic data, so there is no PII in it) – that would be great for us.
How are customer service and technical support?
Technical support is getting better and slower at the same time. When I started my interaction with Grid-Tools, they worked on the bleeding edge of technology: whatever enhancements we submitted, the turnaround time was a couple of weeks, and we would get whatever new features we needed. The processes were really ad hoc; rather than writing support tickets, you would literally reach out to somebody you knew who worked on the product, and they would pass your ticket or enhancement request from person to person. Now the process is very much streamlined, but we have lost that turnaround time.
What other advice do I have?
When selecting a vendor, my personal requirements would be that the tool is stable and that there is a knowledge repository for it. A PPT presentation just gives you an introduction to the tool and its capabilities. To really get your hands dirty, you need in-depth videos or documentation to work from.
The more webinars they do, the better; and if they can record and archive those webinars, that would be great. If they could try to solve some more complex use cases in their demos, that would also be great. Most companies give you a demo of new features with zero complexity; then, when you look at the demo and try to solve your own use cases, you just get stuck, because your use cases are really more complex than what was shown. If they can come up with more in-depth videos that show real, complex use cases, that's going to be great.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Senior Specialist at Cox Automotive
Video Review
The data masking is a powerful aspect of the tool and I have found the best success in the data generation features.
What is most valuable?
A lot of people, when they first started looking at the tool, immediately jumped in and looked at the data masking and the data subsetting it can do, and it works fantastically to help with the compliance issues around masking data. That's a very powerful aspect of the tool.
But the part I've found the best success in is actually the data generation features. By really investing in the concept of generating data from the get-go, we can get rid of any of those concerns right off the bat, since we know it's all made-up data in the first place.
We can fulfill any team's request to very succinct and specific requirements each time. When I look at it as a whole, it's the data generation aspect that really is the big win for me.
How has it helped my organization?
When I look at the return on investment, the financial gains are huge. When I recently ran the numbers, we had about $1.1 million in savings from 2016 alone. What it came down to is that when we started creating our data using Test Data Manager, we reduced the hours used by about 11,800 in 2016. That's real time, and a significant, tangible benefit to the company.
When you think about it, that's somewhere around six employees' worth of time that you've saved; and beyond that, those people get the chance to focus on all the different testing features instead of worrying about where they're going to get their test data from.
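That head-count figure holds up as back-of-the-envelope arithmetic, assuming a standard 2,080-hour work year (40 hours by 52 weeks):

```python
hours_saved = 11_800      # hours saved in 2016, per the figures above
fte_year = 40 * 52        # 2,080 hours in a standard full-time year

print(hours_saved / fte_year)  # ~5.7, i.e. "somewhere around six employees"
```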
What needs improvement?
It's good that they're doing a lot of things right now to continuously improve the tool. Test data management as a strategy across whole organizations has really picked up a lot of momentum, and CA has been intelligent to say, "We have a really great product here, and we can continue to evolve it."
Right now, they're taking everything from a desktop client and moving it into a web portal, and I think there's going to be a lot of flexibility in that. If I were to pick one thing I'm hoping they improve: it is a great database tool, but I'm not always sure about its programmatic abilities. Specifically, it's great in terms of referential integrity across multiple systems and multiple tables, but I do find a couple of limitations every now and then, because it tries to maintain that referential integrity; I have to go in manually when I actually want to break things.
For how long have I used the solution?
I've been using it for about two-and-a-half years at my current position, and I've actually been familiar with the tool for about the last five or six years.
What do I think about the stability of the solution?
The stability is wonderful. I don't think I have had a showstopper issue with the application at any point. It's never caused any major issues with our systems, and I will give credit where credit's due: even right now, as they continue to enhance the tool, it has stayed wonderfully stable through that process, and everyone on CA's side has been there to support any kind of small bug or enhancement that might come up along the way.
What do I think about the scalability of the solution?
It has scaled tremendously. I don't want to harp on it too much, but when you start looking at data generation, your options are endless in the way you want to incorporate it into your environment.
I have my manual testers using it to create data on the fly at any moment. I have my automation users, who go through a bit more of it, getting daily builds sent to them. I have my performance guys sending in requests for hundreds of thousands of records at any given time, which might have taken them two weeks to build out before and which I can now do in a couple of hours. It ties in with our pipelines out to production.
It's a wonderful tool when it comes to the scalability.
How are customer service and technical support?
Any time I've had something where I questioned, "Could this potentially be a bug?" or, even better, "I would love this possible enhancement," it's been a quick phone call or email away. They respond immediately, every single time, and they communicate with me, look at what our use case is, and then come up with an answer, typically on the spot. It's great.
Which solution did I use previously and why did I switch?
We knew we needed to invest in a new solution because our company was dealing with a lot of transformation. Not only do we still have deep roots in our legacy systems – the iSeries, DB2 type of systems – but we have tons and tons of applications that have been built on a much larger scale in the past 40 years, since the original solutions were rolled out. Not only did we have a legacy transition occurring within our own company, but we also changed the way our teams were built. We went from teams with a waterfall, iterative, top-down approach to a much more agile shop.
When you look at the two things together, any data solution we were using before – manual hands on keyboards, or automated scripts – just wasn't going to cut it anymore. It wasn't fast enough or able to react quickly enough. We started looking and realized that Test Data Manager by CA was the tool that could actually help evolve that process for us.
When selecting a vendor, I wanted someone I'm actually going to have some kind of personal relationship with. I realize we can't always have that with everyone we work with, but CA has done a wonderful job of continuously reaching out and saying, "How are you doing? How are you using our product? How do you plan on using our product? Here's what we're considering doing. Would that work for you?" They've been a wonderful partner in terms of communicating the roadmap of where this is all going.
How was the initial setup?
It's a great package that they have out there. It's a plug-and-play kind of system, so it executes well on its own to get up and running in the first place. When they send releases, it's as simple as loading the new release.
What's kind of neat is that if something needs to be upgraded in an extension of the system – some of the repositories and things like that – it's smart enough to let you know. It will shut itself down, take care of it, and then rebuild everything.
Which other solutions did I evaluate?
We evaluated other options when we first brought it in. We looked at a couple of the others. The reason that we ended up choosing Test Data Manager was that it was stronger, at the time at least, in its AS/400 abilities, which is what all of our legacy systems are built on. It was much more advanced than anything else that we were seeing on the market.
What other advice do I have?
It's not something I would often give, but I do give this a perfect rating. We've been able to solve all the data issues we had when we first brought it in, and it has expanded everything we can do as we look at where we want to go with it. That includes its tie-ins to service virtualization, and the way we can build out our environments in ways we'd never considered before. It's a much more dynamic world that we can react to a lot faster, and I attribute almost all of that to Test Data Manager.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.