From what I know, there are some challenges with the number of data formats the solution supports; this is an area that could be improved. On mainframe platforms in particular, the solution should be able to do more. DATPROF could invest in, or partner with, companies like IBM, Kyndryl, and HP; strategic alliances with these mainframe vendors would also grow its customer base.
We still use another tool called Broadcom TDM, which has more features than DATPROF, so there is room for improvement regarding functionality. DATPROF is very receptive to this as a company; they listen and try to incorporate our feedback as much as possible. The synthetic data-generating element of the tool is relatively new, so we sometimes encounter bugs, though when we inform DATPROF, they act quickly to fix any issues. Of course, the fewer bugs we face, the better.
I feel that we have to do a lot of rework on the template implementations. We implemented two applications and have been using them for the last three years, but now we are adding two more, and there is little reusability. Maybe it is a limitation of the tool, but as a customer it feels like we need to start from scratch every time. It would help if we could reuse at least some of what we have already done.
We use DATPROF as part of our global data refresh program. What would help us most is a way to copy a slice of the data from the production instance to a test system, for example the data for a specific period. For SAP applications, we have similar tools that support this feature. For now, we always use a full copy of production even though it is not always needed, and this eats up database space.
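The "slice copy" idea described above can be sketched in plain SQL: instead of cloning the whole production database, only rows inside a date window are copied to the test system. This is a minimal illustration using SQLite in-memory databases; the table and column names (`orders`, `order_date`) are invented for the example, and DATPROF itself configures subsetting differently.

```python
# Minimal sketch of a time-slice copy: move only rows for a given period
# from a "production" database into a fresh test database.
# All schema names here are hypothetical, for illustration only.
import sqlite3

def slice_copy(src: sqlite3.Connection, dst: sqlite3.Connection,
               start_date: str, end_date: str) -> int:
    """Copy only rows whose order_date falls inside [start_date, end_date)."""
    dst.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER, order_date TEXT, amount REAL)"
    )
    rows = src.execute(
        "SELECT id, order_date, amount FROM orders "
        "WHERE order_date >= ? AND order_date < ?",
        (start_date, end_date),
    ).fetchall()
    dst.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    dst.commit()
    return len(rows)

# Build a tiny in-memory "production" database and slice one quarter of it.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE orders (id INTEGER, order_date TEXT, amount REAL)")
prod.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "2021-01-15", 100.0),
    (2, "2021-04-20", 250.0),
    (3, "2021-06-30", 75.0),
    (4, "2021-11-01", 300.0),
])
test = sqlite3.connect(":memory:")
copied = slice_copy(prod, test, "2021-04-01", "2021-07-01")  # second quarter only
```

The test system ends up with just the two Q2 rows, which is the storage saving the reviewer is asking for.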
Test Consultant at a tech company with 51-200 employees
Real User
Jul 5, 2021
The source and target data models need to be in line (by and large). For DevOps or CI/CD, it would be great to have the option to write the subset into an empty database, without any keys or indexes; that would simplify the implementation in our release pipeline. Right now, we need to make process agreements with our supplier to keep our source and target databases in line with the current production version. This may become an issue when (or if) we reach a CI/CD level with multiple releases per day.
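The pattern the reviewer wants, loading a subset into a completely empty schema and adding indexes only afterwards, can be sketched as follows. This is an assumption-laden illustration using SQLite; the table and index names (`customers`, `idx_customers_email`) are made up, and it is not DATPROF's actual mechanism.

```python
# Hypothetical sketch: bulk-load a subset into an empty target (no keys,
# no indexes), then create the index after the load. Deferring index
# creation means nothing has to be validated row by row during the load.
import sqlite3

conn = sqlite3.connect(":memory:")

# 1. Empty target: a bare table definition, no constraints or indexes yet.
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")

# 2. Bulk-load the subset into the keyless table.
subset = [(i, f"user{i}@example.com") for i in range(1000)]
conn.executemany("INSERT INTO customers VALUES (?, ?)", subset)

# 3. Only now add the index, once all the data is in place.
conn.execute("CREATE UNIQUE INDEX idx_customers_email ON customers(email)")
conn.commit()
```

In a release pipeline, step 1 would typically come from the application's own schema migrations, so the subset load no longer depends on the supplier keeping a pre-built target database in sync with production.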
After the first positive user experiences, we will continue to roll out DATPROF tooling more broadly in the organization. It is possible to mask flat files inside DATPROF Privacy, and the current solution is solid, but an improved user interface would make masking flat files easier. Another improvement would be tighter integration between the discovery results and the masking or data generation solutions: the product can discover privacy-sensitive information, and it can of course mask or generate test data, so it would be valuable if these capabilities were integrated.
DATPROF primarily offers capabilities for subsetting test data, masking sensitive information to comply with GDPR, and generating synthetic data for testing environments.
DATPROF enables companies to reduce storage costs, anonymize data, and seamlessly integrate within CI/CD pipelines. It supports databases such as Oracle, SQL Server, MySQL, Postgres, and IBM DB2 LUW, ensuring the protection of sensitive business information while creating test databases. The tool also provides...
The product could be improved by adding functions to mask flat files and XML data.
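For XML data, the kind of masking being requested can be sketched with a deterministic hash, so the same real value always maps to the same masked value and referential consistency across files is preserved. This is a generic illustration, not DATPROF functionality; the element names (`customer`, `email`) are invented.

```python
# Hypothetical sketch: mask email addresses in an XML document with a
# deterministic SHA-256-based replacement. Same input -> same masked value.
import hashlib
import xml.etree.ElementTree as ET

def mask(value: str) -> str:
    """Replace a real value with a stable, non-reversible pseudonym."""
    digest = hashlib.sha256(value.encode()).hexdigest()[:12]
    return f"masked-{digest}@example.com"

doc = ET.fromstring(
    "<customers>"
    "<customer><name>Alice</name><email>alice@corp.com</email></customer>"
    "<customer><name>Bob</name><email>bob@corp.com</email></customer>"
    "</customers>"
)

# Walk every <email> element and overwrite its text with the masked value.
for email in doc.iter("email"):
    email.text = mask(email.text)

masked_xml = ET.tostring(doc, encoding="unicode")
```

The same `mask` function could be applied field by field to delimited flat files, which is essentially the gap the reviewer is pointing at.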
We organized the orchestration via ServiceNow/Runbook. It would be nice if the DATPROF software had this functionality as well.