it_user572907 - PeerSpot reviewer
Senior Specialist at Cox Automotive
Video Review
Vendor
Data masking is a powerful aspect of the tool, but I have found the best success in the data generation features.

What is most valuable?

A lot of people, when they first start looking at the tool, immediately jump in and look at the data masking and the data subsetting it can do, and it works fantastically to help with the compliance issues around masking their data. That's a very powerful aspect of the tool.

But the part I have found the best success in is actually the data generation features. By really investing in that concept of generating data from the get-go, we can get rid of those concerns right off the bat, since we know it's all made-up data in the first place.

We can fulfill any team's request, to very specific requirements, each time. When I look at it as a whole, it's that data generation aspect that really is the big win for me.

How has it helped my organization?

When I look at the return on investment, the gains are not only financial, though those are huge. When I recently ran the numbers, we had about $1.1 million in savings from 2016 alone, just on the financials. What it came down to is that when we started creating our data using Test Data Manager, we reduced the hours we used by about 11,800 in 2016. That's real time. That's a significant, tangible benefit to the company.

When you think about it, that's somewhere around six employees' worth of time that you've now saved. On top of that, those people have the chance to focus on all the different testing features, instead of worrying about where they're going to get their test data.
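As a back-of-the-envelope check on those figures (assuming roughly 2,000 working hours per employee per year, a number the review does not state):

```python
hours_saved = 11_800        # hours saved in 2016, as reported above
dollars_saved = 1_100_000   # 2016 savings, as reported above
fte_hours_per_year = 2_000  # assumption: full-time hours per employee-year

print(hours_saved / fte_hours_per_year)   # ~5.9 -> roughly six employees' worth of time
print(dollars_saved / hours_saved)        # ~93.2 -> about $93 saved per recovered hour
```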

What needs improvement?

It's good that they're doing a lot right now to continuously improve this tool. Test data management as a strategy across the whole organization has really picked up a lot of momentum, and CA's been intelligent to say, "We have a really great product here, and we can continue to evolve it."

Right now, they're taking everything from a desktop client and moving it into a web portal, and I think there's going to be a lot of flexibility in that. If I had to pick one thing I'm hoping they improve on, it's this: it is a great database tool, but I'm not always sure about its programmatic abilities. Specifically, it's great in terms of referential integrity across multiple systems and multiple tables, but every now and then I run into limitations because of that enforced referential integrity, and I have to go in manually to confirm that I really do want to break things.

For how long have I used the solution?

I've been using it for about two-and-a-half years at my current position, and I've actually been familiar with the tool for about the last five or six years.


What do I think about the stability of the solution?

The stability is wonderful. I don't think I've had a showstopper issue with the application at any point. It's never caused any major issues with our systems, and I will give credit where credit's due: even right now, as they continue to enhance the tool, it has stayed wonderfully stable through that process, and everyone on CA's side has been there to support any kind of small bug or enhancement that might come up along the way.

What do I think about the scalability of the solution?

It has scaled tremendously. I don't want to harp on it too much, but when you start looking at data generation, your options for incorporating it into your environment are endless.

I have my manual testers using it to create data on the fly at any moment. I have my automation users, who go through a bit more of it, getting daily builds sent to them. I have performance guys sending in requests for hundreds of thousands of records at a time; what might have taken them two weeks to build out before, I can now do in a couple of hours. It also ties in with our pipelines out to production.

It's a wonderful tool when it comes to scalability.

How are customer service and support?

Any time that I've had something that I question and said, "Could this potentially be a bug," or even better, "I would love this possible enhancement", it's been a quick phone call away or an email. They respond immediately, every single time, and they communicate with me, look at what our use case is on the solutions, and then come up with an answer for me, typically on the spot. It's great.

Which solution did I use previously and why did I switch?

We knew we needed to invest in a new solution because our company was dealing with a lot of transformations. Not only do we still have deep roots in our legacy systems - the iSeries, DB2 type of systems - but we have tons and tons of applications built on a much larger scale over the past 40 years, since those original solutions were rolled out. Not only did we have a legacy transition occurring within our own company, we also changed the way our teams were built. We went from teams with a waterfall, iterative, top-down approach to a much more agile shop.

When you look at those two things together, any data solution we were using before - manual hands on keyboards, or automated scripts - just wasn't going to cut it anymore. It wasn't fast enough and couldn't react quickly enough. We started looking around and realized that Test Data Manager by CA was the tool that could actually help evolve that process for us.

When selecting a vendor, I want someone I'm actually going to have some kind of personal relationship with. I realize we can't always have that with everyone we work with, but CA has done a wonderful job of continuously reaching out and saying, "How are you doing? How are you using our product? How do you plan on using our product? Here's what we're considering doing. Would that work for you?" They've been a wonderful partner in terms of communicating the road map of where this is all going.

How was the initial setup?

It's a great package that they have out there. It's a plug-and-play kind of system, so it executes well on its own to get up and running in the first place. When they send releases, it's as simple as loading the new release.

What's neat about it is that if something on an extension of the system needs to be upgraded - some of the repositories and things like that - it's smart enough to let you know that needs to happen. It will shut itself down, take care of it, and then rebuild everything.

Which other solutions did I evaluate?

We evaluated other options when we first brought it in. We looked at a couple of the others. The reason that we ended up choosing Test Data Manager was that it was stronger, at the time at least, in its AS/400 abilities, which is what all of our legacy systems are built on. It was much more advanced than anything else that we were seeing on the market.

What other advice do I have?

It's not something I would often give, but I do give this a perfect rating. We've been able to solve all of the data issues we were having when we first brought it in, and it has expanded everything we can do as we look to the future of where we want to go with this. That includes its tie-ins with service virtualization; that includes the way we can build out our environments in ways we'd never considered before. It's simply a much more dynamic world that we can react to a lot faster, and I attribute almost all of that to Test Data Manager.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
it_user752190 - PeerSpot reviewer
Senior System Engineer at a comms service provider with 10,001+ employees
Vendor
Can mask data according to your needs and statistical distribution
Pros and Cons
  • "The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you manage them - for example, mask them - and then you load this masked data into the destination. With this solution, it's completely different. On the source environment, there are functions compiled inside the environment, which means they are amazingly fast and, on the source environment, data are masked already. So when you take them, you already take masked data from the source. So you can copy them, even with an unencrypted pipe."
  • "We are using a specific database. We are not using Oracle or SQL, Microsoft. We are using Teradata. There are some things that they don't have in their software. For example, when delivering data, they are not delivering them in the fastest possible way. There are some things which are faster."

What is our primary use case?

Data masking - exactly what this tool was created for. We are going to use it to provision masked data into test and development environments.

We manage a lot of customer data, and the idea is not to grant broad permissions to read all of it. We need to mask the data, but we still need to work with it, which means that developers need access to a lot of data.

We needed a tool that makes providing anonymized data to developers easy. This is probably the one and only tool with so many sophisticated features. We need those features for masking/anonymizing data while preserving its statistical distribution, and for preparing large volumes of test/dev data.

How has it helped my organization?

This tool is super fast and it has solved many of our issues. It is also much better than many other solutions on the market. We've tested different ones, but this one currently looks the best.

We can deliver, first, securely; second, safely; and third, without extra permissions. We no longer need to go through a whole procedure for developers to get permission to access production data. And developers can work as if on production data, because it's almost the same data but, of course, not real: the structure of the data is the same and the context of the data is the same, but the values are different.

The features are very technical and are definitely what we need. We've got rules, especially from security and compliance, and we need to take care of our customer data very securely and carefully. There is no other product that gives you these capabilities.

What is most valuable?

  • Masking of data. 
  • There are lots of filters, templates, vocabularies, and functions (which are very fast) to mask data according to your needs and statistical distribution, too.

The functionality of this tool is something that changed our work. We need to manage the data, and developers need to work on realistic data. On the other hand, you don't want to give that data to the developers, because it's customer data that developers shouldn't see. This tool can deliver an environment that is safe for developers. Developers can work on a large amount of proper, realistic data, but despite the fact that it looks real, it isn't, because it's masked. For the developer, it's absolutely proper: instead of a customer's date of birth, he's got a different date of birth. It's realistic data, but not the actual values, because it's already masked.

The whole process is done by functions which are compiled on the source environment itself. Normally, you take the data from the source, you process it - for example, mask it - and then you load the masked data into the destination. With this solution, it's completely different.

On the source environment, there are functions compiled inside the environment, which means they are amazingly fast, and on the source environment the data is already masked. So when you extract it, you are already taking masked data from the source, and you can copy it even over an unencrypted pipe.

These are two pros you cannot find anywhere else. Most tools - for example, Informatica - take the data as it is, in its original, unmasked form; then you mask it on the Informatica server, and then you send it to the destination. Here, in TDM, you take data that is already masked.
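To illustrate the pattern the reviewer describes - not TDM's actual syntax; the function and table names below are hypothetical stand-ins for masking functions compiled on the source database:

```python
# Sketch of the mask-at-source pattern. MASK_NAME and MASK_BIRTHDATE stand in
# for masking UDFs compiled inside the source database; the real names depend
# on the installation. The masking runs in-database, so only masked values
# ever appear in the result set that travels to the destination.
EXTRACT_SQL = """
SELECT customer_id,
       MASK_NAME(full_name)       AS full_name,   -- masked inside the source DB
       MASK_BIRTHDATE(birth_date) AS birth_date   -- masked inside the source DB
FROM   prod.customers;
"""

# Because the extracted rows are already masked, the copy to the test
# environment can safely use an ordinary, even unencrypted, channel.
```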

What needs improvement?

If you want to automate something, you need to figure it out yourself; there is no easy way (the software is Windows-only). I miss having terminal tools, or an API, for the software.

The software runs on Windows and, from some perspectives, that might be a problem. From our perspective, it is a problem, because we need a different team to deploy to our Windows machines. This is a con from our perspective. Not a big one, but still.

They have already improved this product since our testing of it, so it may be that the following no longer applies.

The interface is definitely one you need to get used to. It's not like a current interface, which is really clear and easy to navigate. It's from some time ago - an interface that you need to get to know.

Also, we use a specific database. We are not using Oracle or Microsoft SQL Server; we are using Teradata. There are some things that they don't have in their software. For example, when delivering data, they don't deliver it in the fastest possible way; there are faster approaches.

We asked CA if there would be any possibility of implementing our suggestions, and they promised us they would, but I haven't seen this product for some time. Maybe they are already implemented. The requests were very specifically related to the product we have, Teradata. This was one of the real issues.

Overall, in fact, there was not much to improve.

For how long have I used the solution?

Less than one year.

What do I think about the stability of the solution?

We didn't face any issues with stability.

The only problems we had, and asked CA to solve, were some very deep things related to our products. They were not core issues, in fact. It was, "We would like to have this because it's faster, or that because it's more robust or valuable."

What do I think about the scalability of the solution?

I cannot answer, because we only did a PoC, so I have no idea how it will work if there are a couple of designers working with the tool.

Still, I don't foresee any issues, because there will be only a few people working on the design of the masking, and the rest will be done at the scripting level, so we may not see any at all.

How are customer service and technical support?

During the PoC we had a support person from CA assigned to us who helped in any way we needed.

Which solution did I use previously and why did I switch?

We didn't use any other solution; we simply needed to have this implemented and tried to figure it out. We looked at the market for what we could use. TDM was our very first choice.

How was the initial setup?

I didn't do the setup myself; it was done by a person from CA. It didn't look hard. It looked pretty straightforward, even with the configuration of the back-end database.

Which other solutions did I evaluate?

After doing our PoC, we tried to figure out if there was any other solution that might fit. We tried and, from my perspective - I was responsible for the whole project - there was no solution we could use in the same way or in a similar way. This product fits our compliance and security requirements very tightly, which is important.

There aren't any real competitors on the market. I think they simply found a niche and started to develop it. We really tried; there are many options out there, but some features are specific to this product, features you might need if you, for example, work for a big organization. And those features aren't in any other product.

There are many solutions for masking data - there are even very basic Python modules you can use - but you need to take the data from the source, mask it, and deliver it to the destination. If you have a big organization like ours and you have to copy one terabyte of data, that will take hours. With this solution, that terabyte is done in a couple of minutes.

What other advice do I have?

We did a proof of concept with TDM to see if the solution fits our needs. We did it for a couple of months, did some testing, did some analysis, and tried to determine if it fit our way of working. Now we are going to implement it in production.

If there is a big amount of data to mask and you need to deliver it conveniently and easily, there is no other solution. Configuration is easy. It's built slightly differently - the design is slightly different from any other tool - but the delivery of the masked data is much smoother than in any other solution. You don't need to use something like a stepping stone: you don't need to copy data to some place, then mask it, and then send it, because you copy data that is already masked. Data is masked on the fly, before it is copied to the destination. You don't need anything like a server in the middle. In my opinion, this is the biggest feature this software has.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Senior Test Data Management Specialist at a transportation company with 10,001+ employees
Real User
Leaderboard
We have moved data creation from manual or limited and costly automated processes to a set of weekly data builds and an On-Demand offering capable of delivering versatile data.

What is most valuable?

Synthetic data creation, using the flexible built-in data functions.

How has it helped my organization?

We have moved data creation from manual or limited and costly automated processes to a set of weekly data builds and an On-Demand offering capable of delivering versatile data that meets the needs of our many teams.

What needs improvement?

An increase in the types of programmatic capabilities could make the tool more powerful. For instance, data inserts in one table are often contingent upon entries or flags in another. In these situations, there is no way to include or exclude a row based on the primary table.
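A hypothetical SQL workaround for that gap (all table, column, and flag names here are illustrative, not from the review): prune the dependent rows after the publish, keeping only those whose primary-table entry carries the required flag.

```python
# Illustrative post-publish cleanup; every identifier is made up.
CLEANUP_SQL = """
DELETE FROM child_orders c
WHERE NOT EXISTS (
    SELECT 1
    FROM   parent_accounts p
    WHERE  p.account_id = c.account_id   -- join back to the primary table
      AND  p.is_active  = 'Y'            -- the flag the insert should respect
);
"""
```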

For how long have I used the solution?

2.5 years

What was my experience with deployment of the solution?

The tool installs in a snap and includes test repositories that allow for new users to start working with the application immediately.

What do I think about the stability of the solution?

The stability of the tool has never been an issue. Any time a possible defect has surfaced, the support team was quick to respond. Beyond that, there have been constant new versions created that provide optimizations.

What do I think about the scalability of the solution?

The many databases supported and the data delivery formats available provide a seemingly endless set of options to meet the ever-growing demands of our testing teams.

How are customer service and technical support?

Customer Service:

Above and beyond that of any company I've worked with before. I've never gone more than an hour or two without a response to a standard ticket.

Technical Support:

Also above the standard. Those who support TDM have an intimate knowledge of the product and its many available use cases.

Which solution did I use previously and why did I switch?

All previous solutions were homegrown, and they lacked the complete solution we found in CA's TDM.

What about the implementation team?

Our process had ups and downs as we attempted to get TDM off the ground. The winning combination for us was TDM experts from a vendor-partner, Orasi Software, Inc., working hand in hand with employees who had an intimate knowledge of our systems.

Which other solutions did I evaluate?

This tool had been purchased by another group in our company, but its potential was not realized.

What other advice do I have?

Our biggest win in implementing this tool came from starting out with a single team and finding some data delivery wins. After that internal proof of concept was realized, expansion to other teams became much simpler.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
it_user779256 - PeerSpot reviewer
Solutions Architect at American Express
Real User
Allows me to generate and manage synthetic data, but the interface could be better
Pros and Cons
  • "It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to also evaluating the results."

What is our primary use case?

Generating synthetic test data.

It has performed fine. It provides the capabilities we were anticipating.

How has it helped my organization?

It allows us to create a testing environment that is repeatable. And we can manage the data so that our testing becomes automated, everything from actually performing the testing to evaluating the results. We can automate that process. Plus, we're no longer using production data.

What is most valuable?

1. I am able to maintain metadata information based on the structures, and
2. I am able to generate and manage synthetic data from those.

What needs improvement?

The interface, based on our unique test case - because ours is an extremely unique platform - could be better. We have to do multiple steps just to create a single output. We understand that, because we are a niche architecture, it's not high on their list, but eventually we're hoping it becomes integrated and seamless.

As noted in my answer on "initial setup," I would like to see that I don't have to do three steps; rather, that it's all integrated into one. Plus, I'd like to know more about their API, because I want to be able to call it directly, passing in specific information so that I can tune the results to my specific needs for that test case, and make it possible to handle multiple messages in one call.

What do I think about the stability of the solution?

Stability is fine. It's stable. It's not like it crashes or anything like that, because it's just a utility we use to generate data. Once we generate the data, we capture it and maintain it. We don't use the tool to continually generate data; we only generate it for the specific test case, and then don't generate it again. But it gives us the ability to handle all the various combinations of variables; that's the big part.

What do I think about the scalability of the solution?

For our platform, scalability probably isn't really an issue. We're not planning on using it the way it was intended, because we're not going to use it to continually generate more data. We want to generate only specific output that we will then maintain separately and reuse. The only time we will generate anything is when there is a different test case needed, a different condition that we need to be able to create. So, scalability is not an issue.

How are customer service and technical support?

Tech support is great. We've had a couple of in-house training sessions. It's coming along fine. We're at a point now where we're trying to leverage some other tools, like Agile Designer, to start managing the knowledge we're capturing, so that we can then begin automating the construction of this component with Agile Designer as well.

Which solution did I use previously and why did I switch?

We didn't have a previous solution.

How was the initial setup?

The truth is that I was involved in the setup but they didn't listen to me. "They" are other people in the company I work for. It wasn't CA that did anything right or wrong; it was that the people who decided how to set it up didn't understand. So we're struggling with that, and we will probably transition over. Right now we have it installed on laptops, and it shouldn't be. It should be server-based. We should have a central point where we can maintain everything.

So, the setup is fairly straightforward, except for the fact that there are three steps we have to go through. We have to do a pre-setup, a pre-process; then we can do the generation of our information; and then there's a post-process we have to perform, only because of the unique characteristics of our platform.
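A sketch of the kind of one-step wrapper the reviewer is asking for, under the assumption that each of the three stages can be driven from the command line; all three script names are placeholders, not real TDM components:

```python
import subprocess

def generate_messages(spec_file: str) -> None:
    """Hypothetical wrapper that chains the three manual stages into one call.
    The script names below are made up for illustration."""
    subprocess.run(["python", "pre_process.py", spec_file], check=True)   # pre-setup stage
    subprocess.run(["python", "generate.py", spec_file], check=True)      # data generation stage
    subprocess.run(["python", "post_process.py", spec_file], check=True)  # platform-specific fix-up

generate_messages("message_spec.json")
```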

Which other solutions did I evaluate?

In addition to CA Test Data Manager, we evaluated IBM InfoSphere Optim. Those were the two products available to our company at the time I proposed the idea of using it in this way.

We chose CA because it had the capability of doing relationship mapping between data variables.

What other advice do I have?

The most important criterion when selecting a vendor is support. And obviously it comes down to: do they offer the capabilities I'm interested in, at a reasonable price, with good support?

I rate it at seven out of 10 because of those three steps I have to go through. If they get rid of those, make it one step, and do these other things, I'd give it a solid nine. Nothing's perfect.

For my use, based on the products out there that I have researched, this is the best one.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user572823 - PeerSpot reviewer
AVP Quality Assurance at GM Financial
Video Review
Real User
Gives you confidence in the data you're creating and keeps you out of the SOX arena, because there's no production data in that environment.

What is most valuable?

Test Data Manager allows you to do synthetic data generation. It gives you a high level of confidence in the data you're creating. It also keeps you out of the SOX arena, because there's no production data within that environment. The more you can put controls in place and keep your data clean, the better off you are. There are some laws coming into effect in the next year or so that are going to really scrutinize production data being in the lower environments.

How has it helped my organization?

We have certain aspects of our data that we have to self-generate. The VIN is one that we have to generate, and we have to be able to generate it on the fly. TDM allows us to generate that VIN based upon whether it's a truck, car, etc. We're in the car and auto loan business.
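For readers unfamiliar with what generating a VIN on the fly involves, here is a minimal Python sketch of a synthetic 17-character VIN with a correct ISO 3779 check digit. The WMI prefix and the generator itself are illustrative only; a real TDM rule would be configured in the tool, not hand-coded, and would also constrain fields like the model-year character.

```python
import random

# ISO 3779 transliteration values for the VIN check-digit calculation.
TRANSLIT = {c: v for v, c in enumerate("ABCDEFGH", start=1)}
TRANSLIT.update({c: v for v, c in enumerate("JKLMN", start=1)})
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update({c: v for v, c in enumerate("STUVWXYZ", start=2)})
TRANSLIT.update({str(d): d for d in range(10)})

WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]
ALPHABET = "ABCDEFGHJKLMNPRSTUVWXYZ0123456789"   # I, O, Q are not valid in a VIN

def with_check_digit(draft: str) -> str:
    """Fill position 9 (index 8) of a 17-character draft VIN with its check digit."""
    total = sum(TRANSLIT[ch] * w for ch, w in zip(draft, WEIGHTS))
    r = total % 11
    return draft[:8] + ("X" if r == 10 else str(r)) + draft[9:]

def fake_vin(wmi: str = "1GC") -> str:
    """Generate a syntactically valid VIN; the WMI prefix is a placeholder."""
    body = "".join(random.choice(ALPHABET) for _ in range(14))
    return with_check_digit(wmi + body)

print(fake_vin())
```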

What needs improvement?

I would probably like to see improvement in the ease of rule use. It sometimes gets a little cumbersome setting up some of the rules. I'd like to be able to see a rule inside of a rule inside of a rule; kind of an iterative process.

What do I think about the stability of the solution?

TDM has been around for a couple of years; I used it at my previous company as well. It's been really stable. It's a tool that probably doesn't get fully utilized. We intend to take it, partner it with the SV solution, and generate the data for the service virtualization aspect.

What do I think about the scalability of the solution?

Scalability is similar along the SV lines; it's relatively easy to scale. It's a matter of how you want to set up your data distribution.

How are customer service and technical support?

We were very pleased with the technical support.

Which solution did I use previously and why did I switch?

When you have to generate the amount of loan volume that we need - 50 states, various tax laws, etc. - I needed a solution with which I can produce quality data that fits the targeted testing we need, any extra test cases, etc. We're more concentrated on being very succinct in the delivery and the time frame we need to get the testing done in.

I used CA at my previous company. I have a prior working relationship with them.

How was the initial setup?

The initial setup was done internally. We were able to follow the instructions that were online when we downloaded it and get the installation done. We did have a couple of calls into the technical support area, and they were able to resolve them fairly quickly.

What other advice do I have?

From my experience with synthetic generation, generating synthetic data can often be cumbersome. With TDM, using some of its rules aspects, you can generate it with your rules in place, so you know your data is going to be very consistent. When we want a particular loan to come through with a particular credit score, we can select and generate the data out of TDM, which creates a data file for my front-end script, using DevTest.

I also push the service virtualization record to respond to the request of the loan, hitting the credit bureau and returning a certain credit score, which then gets us within that target zone for the loan we're looking for, to trigger a rule.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
IT Specialist at a financial services firm with 1,001-5,000 employees
Real User
Leaderboard
Masks and generates data while obeying the relationships in our relational databases
Pros and Cons
• "The data generation is one of the most valuable features because we are able to write a lot of rules. We have some specific rules here in Turkey, for example, Turkish ID numbers and IBAN codes for banks."
• "There are different modules for masking. There is a portal and there is a standalone application as well. The standalone application is more old-fashioned. When you write rules on this old-fashioned interface, because it has more complex functions available for use, you can't migrate them to the portal."

What is our primary use case?

We use it for data generation, for performance testing, and for other test cases. We also use data masking and data profiling for functional testing. Data masking is one of the important aims in our procurement of this tool, because we have some sensitive data in production. We have to mask it to use it in a testing environment. Our real concern is masking, and we are still learning about this subject.

How has it helped my organization?

CA TDM is valuable for us because we use relational databases, where it's problematic to sustain the relationships, foreign keys, and indexes. TDM obeys all the relationships and does the masking and data generation according to those relationships.
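The key idea behind masking that obeys the relationships, in a minimal sketch (this is the general technique, not TDM's implementation): mask key columns deterministically, so the same input always yields the same output and foreign-key joins still line up after masking.

```python
import hashlib
import hmac

SECRET_KEY = b"example-masking-key"   # placeholder; a real key would be managed securely

def mask_key(value: str, width: int = 10) -> str:
    """Deterministic masking: identical inputs always produce identical outputs,
    so a customer number masked in the parent table matches the same customer
    number masked in every child table."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return str(int(digest, 16) % 10**width).zfill(width)

# The join between customers and orders survives masking:
assert mask_key("CUST-0042") == mask_key("CUST-0042")
```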

Also, the testing team is using TDM to write the rules. Using this tool, our data discovery skills have increased. That is an advance for our company.

In terms of performance testing, before TDM, preparing the data and data generation took a week for 20,000 sets of data. Now, with TDM, it takes just one day, which is great. We haven't had much experience with masking yet - we are in the adaptation phase - but data generation has increased our performance by about 60 percent.

What is most valuable?

The tool has strong data generation functions. When we needed a special function that was not in the list, the support team generated it and added it via a patch within a limited time frame.

For performance testing, we needed large amounts of data. The effort for data generation for this purpose has also decreased significantly.

Due to the security policies and regulations we have to obey, we needed masked production data for testing. With the help of this tool, and with data integrity taken into account, we can mask the data in a variety of ways (shuffling, using a seed list, using functions, etc.).
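As an example of the kind of country-specific rule mentioned in the pros above, here is a minimal Python sketch of generating a synthetic Turkish national ID (TC Kimlik No) that satisfies the published check-digit rules; a real TDM rule would express this in the tool's own function language rather than in Python.

```python
import random

def fake_tckn() -> str:
    """Generate a synthetic Turkish ID number that passes both checksum digits."""
    digits = [random.randint(1, 9)] + [random.randint(0, 9) for _ in range(8)]
    odd_sum = sum(digits[0::2])    # 1st, 3rd, 5th, 7th, 9th digits
    even_sum = sum(digits[1::2])   # 2nd, 4th, 6th, 8th digits
    d10 = (odd_sum * 7 - even_sum) % 10
    d11 = (sum(digits) + d10) % 10
    return "".join(map(str, digits + [d10, d11]))

print(fake_tckn())
```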

What needs improvement?

There are different modules for masking. There is a portal, and there is a standalone application as well. The standalone application is more old-fashioned. When you write rules on this old-fashioned interface, you can't migrate them to the portal, because the standalone application has more complex functions available for use.

We also have some security policies in our company that needed adaptation. For example, the people writing the rules can see all the production data, which is a large problem for us. It would be helpful if there were an increase in the ability to apply security policies.

For how long have I used the solution?

One to three years.

What do I think about the stability of the solution?

The tool is stable. This was one of the reasons we chose it. We haven't had any unknown problems or issues, so it has paid off.

What do I think about the scalability of the solution?

Scalability is a matter of how you use your systems. Our requirements called for using it with MS SQL Server, Db2, and Db2 LUW. We scaled the tool across all the databases we have, so it's scalable.

How are customer service and technical support?

Technical support is okay. We haven't had many issues lately, but we had a bug at the proof-of-concept stage and they solved it.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

The initial setup was straightforward. One of CA's consultants came to our company and did the installation in about two days. We use mainframes here, and mainframes are very complex. Still, the consultant did it in two days.

What about the implementation team?

We worked with a CA consultant to do all the adaptation, over the course of about two months. We were happy with him.

What's my experience with pricing, setup cost, and licensing?

Part of the licensing depends on whether you want to use the portal; it's based on floating users. The other part depends on what type of system you are using. We are using a mainframe, so we paid good money for a mainframe license. That's okay because, for us, the main work of this tool is on those systems. The mainframe is a critical system, so the cost is acceptable.

Which other solutions did I evaluate?

We looked at IBM Optim and Informatica TDM.

What other advice do I have?

It's important to know the requirements of your system, for example, the security policies you have to observe. The requirements may include concerns about relational or other database systems. You have to know your systems. Depending on your system, consider using more than one consultant, because we had problems using just one. Also, compare all the tools by doing proofs of concept. That's important.

We have been using it for three months, but before that we also did a proof of concept, in stages, for about a year.

Regarding future use, we plan to use it in automation testing with continuous integration tools. Before running the automated tests, we will prepare our generated data with TDM. We also have future plans for storage virtualization and the use of Docker applications. It is possible that for Docker we would also use the TDM rule set. I want to believe it's scalable.

We have five testers using it to write rules. We also have 20 business analysts using and running these rules. In terms of maintenance, two developers would be enough. Our consultant coached our developers on our requirements. A testing engineer would also be okay for maintenance.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
it_user797949 - PeerSpot reviewer
Domain Manager at KeyBank National Association
Video Review
Real User
Enables us to incorporate automation and self-service to eliminate all of our manual efforts
Pros and Cons
• "It removes manual intervention. A lot of the time we spent previously was manual: an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts."
• "Core features that we needed were synthetic data creation, and to be able to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met most of our needs."
• "All financial institutions are based on mainframes, so they're never going to go away. There are opportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product."

How has it helped my organization?

The benefit is that it removes manual intervention. A lot of the time we spent previously was manual: an individual running SQL scripts against databases, or manually going through a UI to create data. These solutions allow us to incorporate automation and self-service to eliminate all of our manual efforts.

What is most valuable?

Currently, the complex data mining that we do. Any sort of financial institution runs into the same challenges we face: referential integrity across all databases, and finding that one unique piece of customer information that meets all the criteria we're looking for. All the other functions, such as subsetting and data creation, are fabulous as well.

What needs improvement?

I think the biggest one is this: all financial institutions are based on mainframes, so mainframes are never going to go away. There are opportunities to increase functionality and efficiencies within the mainframe solution, within this TDM product. Certainly, it does what we need, but there are always opportunities for greatly improving it.

What do I think about the stability of the solution?

Stability for the past year and a half has been very good. We have not had an outage that has prevented us from doing anything. It has allowed us to connect to the critical databases that we need, so no challenges.

What do I think about the scalability of the solution?

We haven't run into any issues at this point. So far we think we're going to be able to get where we need to. In the future, as we expand, we may need to increase the hardware associated with it and optimize some query language, but I think we'll be in good shape.

Which solution did I use previously and why did I switch?

We were not using a previous solution. It was all home-grown: a lot of automated scripting and some performance scripting, in addition to manual efforts.

As we looked at the available solutions, some of the core features we needed were synthetic data creation, and the ability to do complex data mining and profiling across multiple databases with referential integrity intact across them. CA's product actually came through with the highest score and met most of our needs.

What other advice do I have?

I'd rate it about an eight. It provides the functionality we're needing. There are always opportunities for improvement, and I don't ever give anyone a 10, so it's good for our needs.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Practice Manager (Testing Services) at a financial services firm with 1,001-5,000 employees
Video Review
Vendor
Includes basic services which allow you to mask data and create synthetic data. It also includes test matching, which accelerates test cycles and allows automation to happen.

What is most valuable?

You've got the basic services of the TDM tool, which allow you to mask data and create synthetic data, but I think what really sets TDM apart from the other competitors is the added extras you get with doing true test data management: things like the cubing concepts that Grid-Tools' Datamaker really brings to bear within test data management teams. You've also got test matching, which massively accelerates test cycles, really gives stability, and allows automation to happen.

How has it helped my organization?

We've got a centralized COE for test data management within our organization; the benefits are really threefold, in terms of cost, quality, and time to market. In terms of quality, data is the glue that holds systems together, and therefore, if I understand my test data, I understand what I'm testing. Through the tooling, and the maturity in the tooling, we're really bringing an added quality aspect to what we test, how we test, and the risk-based testing approach we might take.

In terms of speed to market, because we don't manually produce data anymore - we use intelligent profiling techniques and test data matching - we massively reduce the time we spend finding data, and we can also produce data on the fly, which turns test data cycles around. In terms of cost, because we're doing it a lot quicker, it's a lot cheaper.

We have a centralized test data management team that caters to all development within my organization. We've created an organization that is much more effective and optimized in terms of the time it takes to identify data and get into test execution in the right way.

What needs improvement?

I think the big area for exploitation for us is a feature that already exists within the tool. The TCO element is something massive; I talked earlier about the maturity and the structure it gives to testing. I think this is a game changer in terms of articulating the impact of change. No project goes swimmingly the first time, and therefore the ability to assess the impact on testing by making simple process changes is a massive benefit.

What do I think about the stability of the solution?

The stability of the solution is really fine. The really big question is the stability of the underlying system it's trying to manipulate; the tool is the tool, and it does what it needs to do.

What do I think about the scalability of the solution?

Within our organization we have many, many platforms and many, many different technologies. One of the interesting challenges we always have, especially when we're doing performance testing, is whether we can get the volumes of data in sufficient time. We use things like data explosion quite often, and it does what it needs to do, very quickly.

How are customer service and technical support?

We work in an organization where we use many tools from many different suppliers. The relationship my organization has with CA is a much richer one, in the sense that it's not just tool support.

Which solution did I use previously and why did I switch?

Originally, we used to spend hours and hours of spreadsheet time manually creating and keying data: massively inefficient and massively error-prone. And clearly, as part of a financial institution, we need to conform to regulations. Therefore, we needed an enterprise solution to make sure we could actually deliver regulation-compliant test data to suit our projects.

The initial driver in buying any tooling is the problem statement: what's the driver to get these things in? Once you realize that there is so much more than just the regulatory piece - as I say, the time, cost, and quality benefits it can bring to testing - that's really the bigger benefit than just the regulatory one.

How was the initial setup?

We've had the tool for about four or five years now within the organization. As you might expect, we first got the guys in not knowing anything about the tool and not really knowing how to deploy it, so we called on the CA guys to come in and show us how the tool works, and also how to make it work within our organization. We had a problem case that we wanted to address, we used that as the proving item, and that's really where we started our journey toward a dedicated test data management function.

Which other solutions did I evaluate?

Important evaluation criteria: to be honest, it's got to be around what the tool does. A lot of the tools on the market do the same thing; the questions are whether there are things that differentiate those tools, and what problem statement the organization is really trying to fulfill. Once you've got the tool, that's great, but you need the people and process; without that, and without the kind of relationship we have with the CA guys, you've just got shelfware. We went through a proper RFP selection process where we set our criteria, invited a few of the vendors in to demonstrate what they could do for us, and picked the one that was best suited to us.

What other advice do I have?

Rating: no one's perfect, but you've got to put it in the top quartile, so probably eight upwards. In terms of test data management solutions, it's the best out there. The way the tool is going, moving into other areas like TCO and integration with SV, is a massive thing for us.

My recommendation: this is absolutely best in breed. As well as buying the tool, it would be a mistake not to also invest in understanding how the tool integrates into the organization, and how to bring it to the tools team, the testing teams, and the environment teams that you need to work with.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.