Delivery Director at a computer software company with 1,001-5,000 employees
Real User
Top 5
2023-08-09T07:32:00Z
Aug 9, 2023
The data quality assessment requires third-party components and a separate license; I would like to see better integration around data quality. I would also appreciate support for non-structured databases in a future release.
Project Coordinator at a computer software company with 201-500 employees
Real User
Top 20
2023-07-07T20:05:00Z
Jul 7, 2023
The DevOps architect and I think erwin Data Intelligence is the better product technically because it's designed more for a technical user. But it couldn't pass one penetration test, and in the federal government, a single problem like that is enough to lose their confidence. The only other area with real room for improvement was very large datasets where the logical names or lexicons weren't well groomed or maintained. A huge data set would cause erwin to crash; with half a million or a million tables, erwin would hang. And when the metadata came in, it needed a lot of manual work to clean it up.
Everything about Data Intelligence is complex. Though we've used the tool for five years, we're still only using about 30 to 40 percent of its capabilities. It would be helpful if we could customize and simplify the user interface because there are so many redundant things. Some of the features aren't being used. It's challenging to understand everything, especially if you aren't using it daily.
Works at an insurance company with 5,001-10,000 employees
Real User
Top 20
2022-12-18T06:56:00Z
Dec 18, 2022
We were fairly impressed with the Smart Data Connectors for reverse or forward engineering to automate the delivery and maintenance of data pipelines. However, our SSIS packages are extremely complex and we pass a lot of variables, which makes any kind of reverse-engineering automation extremely hard. While we were impressed with what it was able to do, it wasn't great for us. The level of abstraction within the packages, with parameters and variables passed down through several levels of containers, makes it very difficult to consume. In more complex use cases, it's hard to follow and to map all the metadata correctly. It's a little bit clunky. I think a lot of these features were bolted on, and they don't transition smoothly in the interface. I would like to see a little more cohesion.
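To illustrate why that nesting is hard to automate, here is a minimal Python sketch that walks an SSIS .dtsx package (plain XML) and lists the variables defined at each container level. The file name is a placeholder, and the element names follow the SSIS 2012+ package schema as I understand it, so adjust for your version.

```python
# Sketch: walk an SSIS .dtsx package (plain XML) and list the variables defined
# at each container level. The nesting this reveals is part of why automated
# reverse engineering of such packages is hard. "package.dtsx" is a placeholder.
import xml.etree.ElementTree as ET

DTS = "{www.microsoft.com/SqlServer/Dts}"  # SSIS package namespace

def walk(executable, depth=0):
    name = executable.get(DTS + "ObjectName", "<unnamed>")
    variables = [
        v.get(DTS + "ObjectName")
        for v in executable.findall(f"./{DTS}Variables/{DTS}Variable")
    ]
    print("  " * depth + f"{name}: {len(variables)} variable(s) {variables}")
    # Recurse into child containers and tasks nested under this executable.
    for child in executable.findall(f"./{DTS}Executables/{DTS}Executable"):
        walk(child, depth + 1)

walk(ET.parse("package.dtsx").getroot())
```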
We have loaded over 300,000 attributes and more than 1,000 mappings. Performance is slow, depending on the lineage or search. This is supposed to be fixed in later versions, but we haven't upgraded yet. The integration with various metadata sources, including erwin Data Modeler, isn't smooth in the current version; it took some experimentation to get things working. We hope this is improved in the newer version. The initial version we used felt awkward because erwin had implemented features from other companies into its offering.
Architect at an insurance company with 10,001+ employees
Real User
2022-08-10T05:48:00Z
Aug 10, 2022
Data quality has so many facets, but we are definitely not using the core data quality features of this tool. Data quality has definitely improved because the core data stewards, data engineers, and business sponsors know what data they are looking for and how the data should move, and they are setting up those rules. We still need another layer of data quality assessment on the source, to see if it is sending us the wrong data or if there are issues with the source data. For that, we need rule-based data quality assessment or scoring that we can apply across tools and other technology stacks. We need to be able to bring the business in to define business rules, execute those rules, and then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool as of today.
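As a rough illustration of the kind of rule-based scoring described above, here is a minimal Python sketch that defines per-attribute rules and reports the share of rows passing each one. The rules, columns, and values are invented for the example and are not an erwin DI feature.

```python
# Minimal rule-based data-quality scoring: each rule checks one attribute and
# the score is the fraction of rows that pass. All names and data are made up.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    attribute: str
    description: str
    check: Callable[[Any], bool]

rules = [
    Rule("policy_id", "must be present", lambda v: v not in (None, "")),
    Rule("premium", "must be a non-negative number",
         lambda v: isinstance(v, (int, float)) and v >= 0),
]

rows = [
    {"policy_id": "P-1001", "premium": 250.0},
    {"policy_id": "", "premium": -5},
]

def score(rows, rules):
    # Returns {attribute: fraction of rows passing that attribute's rule}.
    return {
        rule.attribute: sum(1 for r in rows if rule.check(r.get(rule.attribute))) / len(rows)
        for rule in rules
    }

print(score(rows, rules))  # {'policy_id': 0.5, 'premium': 0.5}
```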
Business Intelligence BA at a insurance company with 10,001+ employees
Real User
2021-05-24T17:09:00Z
May 24, 2021
Improvement is required for the AIMatch feature, which is supposed to help automatically discover relationships in data. It is a feature that is in its infancy, and I have not used it more than a few times. There is room for improvement in the data cataloging capability. Right now, there is a list of a lot of sources they can catalog, or create metadata upon, but if they could add more, that would be a good plus for this tool. The reason we need this functionality is that we don't use the modeling tool that erwin has; instead, we use a tool called Power Viewer. Both erwin and Power Viewer can create XSD files, but you cannot import a file created by Power Viewer into erwin. If they were more compatible with Power Viewer and other data modeling solutions, it would be a plus. As it is now, if we have a data model exported into XSD format from Power Viewer, it's really hard or next to impossible to import into DI. We have a lot of projects and a large number of users, and one feature that is missing is being able to assign users to groups. For example, it would be nice to have IDs such that all of the users from finance have the same one. This would make it much easier to manage the accounts.
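As a possible stopgap for that XSD gap, here is a small Python sketch that flattens a model XSD into entity/attribute pairs for manual review or loading. The file name is a placeholder, and treating top-level complexTypes as entities is an assumption about the export format, not a documented behavior of either tool.

```python
# Flatten a data-model XSD export into (entity, attribute, type) rows so the
# metadata can be reviewed or loaded by hand when a direct import fails.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def flatten(xsd_path):
    root = ET.parse(xsd_path).getroot()
    rows = []
    # Assumption: each top-level complexType is an entity and its nested
    # xs:element declarations are its attributes.
    for ctype in root.findall(f"{XS}complexType"):
        entity = ctype.get("name", "<anonymous>")
        for elem in ctype.iter(f"{XS}element"):
            rows.append((entity, elem.get("name"), elem.get("type")))
    return rows

for entity, attr, xsd_type in flatten("model.xsd"):  # hypothetical export file
    print(f"{entity}.{attr}: {xsd_type}")
```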
Senior Director at a retailer with 10,001+ employees
Real User
2021-05-16T04:21:00Z
May 16, 2021
There may be some opportunities for improvement in the user interface to make it a little more intuitive. They have made good progress. Originally, when we started, we were on version 9 or 10. Over the last couple of releases I've seen improvements, but there might be a few additional areas of the UI where they can make enhancements.
The UI just got a big uplift, but behind the UI there are quite a few different integrations going on. One big improvement we would like to see is workflow integration of codeset mapping with the erwin source-to-target mapping. That's a bit clunky for us; the two often seem to be in conflict with one another, and codeset mappings used within the source-to-target mappings are difficult to manage. Some areas take time to process, such as metadata scans and some of the management functions at large scale. We've worked with erwin support on that to a degree, but it seems to be an inherent part of the scale of our particular project.
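To make the coupling concrete, here is a tiny Python sketch of a codeset lookup embedded inside a source-to-target column mapping; when codesets and mappings live in separate workflows, keeping the two in sync is exactly the pain point described. The codes and columns are invented.

```python
# A codeset (code -> description) used inside a source-to-target row mapping.
# If the codeset changes in one workflow and the mapping in another, they drift.
GENDER_CODESET = {"M": "Male", "F": "Female", "U": "Unknown"}

def map_row(source_row: dict) -> dict:
    return {
        "customer_id": source_row["cust_no"],
        # Codeset lookup embedded in the column mapping; an unmapped code
        # falls back to "Unknown" instead of breaking the load.
        "gender": GENDER_CODESET.get(source_row["gender_cd"], "Unknown"),
    }

print(map_row({"cust_no": 42, "gender_cd": "F"}))  # {'customer_id': 42, 'gender': 'Female'}
```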
Practice Director - Digital & Analytics Practice at HCL Technologies
Real User
2020-10-14T06:37:00Z
Oct 14, 2020
I would like to see a lot more AI infusion into all the various areas of the solution. Another area where it can improve is support for graph-type databases, where relationship discovery and relationship identification are much easier. Overall, automation for associating business terms with data items, and automatic relationship discovery, can be improved in upcoming releases. But I'm sure that erwin is innovating a lot.
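A minimal Python sketch of graph-style relationship discovery in the spirit of this suggestion, using networkx: data items that share a business term get linked, and the shared terms annotate the edge. The columns and terms are made up, and this is not how erwin implements anything.

```python
# Link data items that share at least one associated business term, then list
# the discovered relationships. Requires networkx; all names are illustrative.
import networkx as nx

term_assignments = {
    "CLAIM.CLAIM_AMT":    {"claim", "amount"},
    "POLICY.PREMIUM_AMT": {"premium", "amount"},
    "POLICY.POLICY_ID":   {"policy", "identifier"},
    "CLAIM.POLICY_ID":    {"policy", "identifier"},
}

graph = nx.Graph()
items = list(term_assignments)
for i, a in enumerate(items):
    for b in items[i + 1:]:
        shared = term_assignments[a] & term_assignments[b]
        if shared:
            graph.add_edge(a, b, terms=sorted(shared))

for a, b, data in graph.edges(data=True):
    print(f"{a} <-> {b} via {data['terms']}")
```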
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
2020-08-04T07:26:00Z
Aug 4, 2020
The metadata ingestion is very nice because of the ability to automate it. It would be nice to be able to do this ingestion, or set it up, from one place, instead of having to set it up separately for every data asset that is ingested. erwin has been a fantastic partner with regard to our suggestions for enhancements, and that's why I'm having difficulty thinking of areas for improvement of the solution. They are delivering enhancements that we've requested in every release.
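A simple sketch of the "set it up from one place" idea: one config list drives metadata ingestion for every asset instead of configuring each asset separately. The connection strings are placeholders, and ingest() stands in for whatever ingestion call the catalog exposes.

```python
# One central list of assets drives ingestion for all of them; adding an asset
# means adding one entry here rather than a separate setup per source.
ASSETS = [
    {"name": "claims_dw",   "type": "sql-server", "conn": "Server=...;Database=Claims"},
    {"name": "policy_lake", "type": "parquet",    "conn": "s3://bucket/policy/"},
]

def ingest(asset: dict) -> None:
    # Placeholder: call the catalog's metadata-harvesting API for this asset.
    print(f"Harvesting metadata from {asset['name']} ({asset['type']})")

for asset in ASSETS:
    ingest(asset)
```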
Data Program Manager at a non-tech company with 201-500 employees
Real User
2020-04-12T07:27:00Z
Apr 12, 2020
If we are talking about the business side of the product, maybe the Data Literacy component could be made a bit simpler. You have to be quite hands-on with it, so there is room for improvement.
Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
Real User
2020-01-30T07:55:00Z
Jan 30, 2020
It does have some customization, but it is not quite as robust as erwin DM; not everything can have as many user-defined properties or customized pieces as I might like. There are a lot of little things, like moving between read screens and edit screens. Those little human-interface pieces of the application need to mature a bit to make it easier to get where you want to go and enter your data.
There is room for improvement in automation, no question. Also, the fact that I sometimes have to go in and out of different applications, even though it's all part of the erwin suite, suggests it could be architected a little better. I think they do have some ideas for improvements there. But regarding the data governance tool itself, there was a huge learning curve for me, and I'd been in software development for most of my career. The application itself, and how it runs menus and screens where you can modify and code, is complex, and I have found that cumbersome. I had one person make an error, and it cost us a few days because it affected a whole slew of options and objects; he didn't know what he was doing. That was not their fault; it was purely my fault for allowing it to happen. For me, that was a struggle.
Solution Architect at a pharma/biotech company with 10,001+ employees
Real User
2020-01-22T07:36:00Z
Jan 22, 2020
The SDK behind this entire product needs improvement. The company really should focus more on this, because we were finding some inconsistencies at the SDK level. Everything worked fine from the UI perspective, but when we started writing deep automation scripts that go through multiple API calls inside the tool, only some pieces of it worked, or it would not return the exact data it was supposed to. This is the number one area for improvement. The tool provides the WSDL API as another way to access the data, and it is the same story as with the SDK. We are heavily using this API and are finding some inconsistencies in its responses, especially as we go for the more nonstandard features. The team has been fixing this for us, so we have some support. This was probably overlooked because the product team focused more on the UI than on the API.
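For anyone scripting against a WSDL-style metadata API, a defensive wrapper like the following Python sketch (using the zeep SOAP client) can surface inconsistent or partial responses early instead of letting them flow into downstream automation. The WSDL URL, operation name, and expected fields are hypothetical, not the actual erwin DI API surface.

```python
# Defensive wrapper around a SOAP/WSDL metadata call: serialize the response
# and fail fast if expected fields are missing or empty. Endpoint, operation,
# and field names are placeholders, not the real erwin DI API.
from zeep import Client
from zeep.helpers import serialize_object

WSDL_URL = "https://di.example.com/MetadataService?wsdl"      # placeholder
EXPECTED_FIELDS = ("assetName", "assetType", "lastUpdated")   # assumed shape

client = Client(WSDL_URL)

def get_asset(asset_id: str) -> dict:
    response = client.service.GetAsset(assetId=asset_id)      # hypothetical operation
    record = dict(serialize_object(response) or {})
    missing = [f for f in EXPECTED_FIELDS if record.get(f) in (None, "")]
    if missing:
        raise ValueError(f"Incomplete response for {asset_id}: missing {missing}")
    return record
```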
@reviewer1270386 I've had trouble finding information about what the erwin DI API can do. Do you have any recommendations? I know this post was for an older version, but any guidance would be greatly appreciated.
The technical support could be improved. When we had an issue, we were given vague answers that did not resolve the issue.
The solution's Arabic language processing is limited. The results are limited when you use the interface in Arabic.