Analytics Delivery Manager at DXC
Real User
Value lies in the accuracy, quality, and completeness of the migration source-to-target mapping, and in the acceleration of development through code automation.
Pros and Cons
  • "We use the codeset mapping quite a bit to match value pairs to use within the conversion as well. Those value pair mappings come in quite handy and are utilized quite extensively. They then feed into the automation of the source data extraction, like the source data mapping of the source data extraction, the code development, forward engineering using the ODI connector for the forward automation."
  • "One big improvement we would like to see would be the workflow integration of codeset mapping with the erwin source to target mapping. That's a bit clunky for us. The two often seem to be in conflict with one another. Codeset mappings that are used within the source to target mappings are difficult to manage because they get locked."

What is our primary use case?

We use DI for Data Governance as part of a large system migration supporting an application refresh and multi-site consolidation. Metadata Manager is used to harvest metadata, which is augmented with custom metadata properties identifying the rules criteria that drive automated source-to-target mapping. A custom-built code-generation connector then automates forward engineering by generating Groovy code. We've developed a small number of connectors supporting this 1:1 data migration. It's a really good product that we've been able to put to very good use.

How has it helped my organization?

This use case is a one-time system conversion; the solution has no life after the migration. The value is in the acceleration, accuracy, quality, and completeness of the migration source-to-target mapping and the generated data management code.

The work centers on the extraction and staging of the source application data, targeting ~700 large objects from the overall application set of ~2,400 relational tables. Each table extract has light join and selection criteria, which are injected into the source metadata. The application itself is moving to a next-generation application that performs the same business function. Our client is in health and human services welfare administration in the United States. This use case doesn't include ongoing data governance for our client, at least at this point.
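
To make the metadata-driven extraction idea concrete, here is a minimal sketch in Python; the table name, metadata shape, and criteria are invented for illustration and are not erwin DIS structures:

```python
# Hypothetical metadata record per table; in DIS the join and selection
# criteria would live in custom metadata properties on the source objects.
EXTRACT_METADATA = [
    {
        "table": "CASE_MASTER",
        "columns": ["CASE_ID", "CASE_STATUS", "OPEN_DT"],
        "join": "JOIN STATUS_REF r ON r.STATUS_CD = CASE_MASTER.CASE_STATUS",
        "filter": "CASE_MASTER.OPEN_DT >= DATE '2015-01-01'",
    },
]

def build_extract_sql(meta: dict) -> str:
    """Render one staging extract from the injected join/selection criteria."""
    cols = ", ".join(f"{meta['table']}.{c}" for c in meta["columns"])
    sql = f"SELECT {cols} FROM {meta['table']}"
    if meta.get("join"):
        sql += f" {meta['join']}"
    if meta.get("filter"):
        sql += f" WHERE {meta['filter']}"
    return sql

for m in EXTRACT_METADATA:
    print(build_extract_sql(m))
```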

erwin DIS has enabled us to automate critical areas of data management infrastructure. That's where we see the benefit: in the acceleration of delivery as well as the improvement of quality and the reduction of costs.

erwin DIS's generation of data management code through automated code engineering has reduced the time it takes to go from initial concept to implementation on the work currently in progress. There is no production delivery as of yet; that's still another year and a half out. This is a multi-year project to which this use case applies.

erwin has improved the transparency and accuracy of data movement and data integration quite a bit through its various reporting facilities. We can make self-service reporting available through the business user portal. erwin DIS has provided the framework and the capability to be transparent and to involve stakeholders the whole way along.

Through the business user portal and workflows, we're able to provide effective stakeholder reviews as well as stakeholder access to all of the information and knowledge that's collected. The facility provides quite a few capabilities around user-defined parameters to capture data knowledge and organizational change information, which project stakeholders can use and apply throughout the program. The client and stakeholders use the business user portal for extended visibility, which is a big benefit.

We're interested in the AIMatch feature. It's something that we worked on with AnalytiX DS early on, helping to develop some of the ideas behind it. We were somewhat instrumental in bringing some of that technology in, but in this particular case we're not using it.

What is most valuable?

The most valuable features include: 

  • The mapping facilities
  • All of the mapping controls and workflow
  • The metadata injection and custom metadata properties for quality of mappings
  • The various mapping tools and reports that are available
  • Gap analysis
  • Model gap analysis
  • Codesets and codeset value mapping 

We use codeset mapping quite a bit to match value pairs for use within the conversion. Those value-pair mappings come in very handy and are used extensively. They feed into the automation of the source data extraction and source data mapping, as well as the code development: forward engineering using the ODI connector.
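
As a simplified illustration of the value-pair idea (the codes and mapping below are invented, not the project's actual codesets), a codeset mapping is essentially a lookup applied to each extracted row during conversion:

```python
# Hypothetical codeset: source code values -> target code values.
GENDER_CODESET = {"M": "MALE", "F": "FEMALE", "U": "UNKNOWN"}

def convert_row(row: dict, column: str, codeset: dict) -> dict:
    """Translate one column of an extracted row using a codeset value-pair mapping."""
    out = dict(row)
    out[column] = codeset.get(row[column], row[column])  # pass unmapped codes through
    return out

source_row = {"CLIENT_ID": 1017, "GENDER_CD": "F"}
print(convert_row(source_row, "GENDER_CD", GENDER_CODESET))
# {'CLIENT_ID': 1017, 'GENDER_CD': 'FEMALE'}
```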

Smart Data Connectors that reverse engineer and forward engineer code from a BI, ETL, or data management platform are where we're gaining the most value. The capability is limited only by one's imagination and ability to come up with innovative ideas: we have been able to apply some form of automation to every idea we've come up with. That's been quite good.

    What needs improvement?

The UI just got a big uplift, but behind the UI there are quite a few different integrations going on.

One big improvement we would like to see is the workflow integration of codeset mapping with the erwin source-to-target mapping. That's a bit clunky for us, and the two often seem to be in conflict with one another. Codeset mappings that are used within the source-to-target mappings are difficult to manage.

Some areas take time to process, such as metadata scans and some of the management functions at large scale. We've worked with erwin support on this to a degree, but it seems to be an inherent part of the scale of our particular project.


    For how long have I used the solution?

    We're in our second year of using DI for Data Governance.

    What do I think about the scalability of the solution?

erwin's latest general release has addressed the performance of metadata sources with more than 2,000 objects. Our use case has three metadata sources, each with ~2,400 relational objects. DIS provides good capability to organize projects and subject areas with multiple sublayers. All mappings have been set to synchronize with scanned metadata. Our solution has built close to 2,000 mappings with over 20,000 mapped code value pairs. So far so good; scanning and synchronizing metadata and reporting on enterprise gaps take some time to process, but not unreasonably long considering the work performed.

    How are customer service and support?

erwin support is pretty good. We've had our struggles there and I've gone through a lot of tickets. I'd rate them an eight out of ten.

There have been a couple of product enhancement requests, one of which I've not been able to get traction on; that was with regard to codeset management and workflows. There's some follow-up that I have to do there, as it doesn't seem to be a priority. It usually takes a couple of discussions or a deep dive to reach a shared understanding of the problem before a resolution. Sometimes that takes a little longer than I would like, but all in all, it's pretty good.

    What about the implementation team?

    We had erwin involved in the implementation. 

I don't think it can be stood up quickly with minimal professional services. There's quite a bit of involvement. Integrating the solution into an environment's ecosystem has challenges that take some effort, especially if you're building new connectors. There's a good bit of effort in designing, preparing, planning, and building. It's pretty heavy as far as integration effort goes.

    What was our ROI?

The client is thrilled with the higher-quality, lower-cost product and services.

    What's my experience with pricing, setup cost, and licensing?

The financial model will be different. There is the cost of the software, but there are offsetting accelerations through the automation, as well as cost and efficiency gains. Don't be afraid of automation, and don't get hung up on losing revenue because of it. I've seen some financial managers resist automation that results in a reduction of labor revenue. Those reductions are ideally overcome through additional engagements, improved customer satisfaction, quality, and add-on support. Whatever the case, automation is a good thing.

The fact that this solution can be hosted in the cloud does not affect the total cost of ownership; the licensing cost is the same whether we use the cloud or on-prem. It may be due to partner agreements, but we do get some discounts, and there's some negotiated pricing already in place between our companies. I didn't see a difference between the cloud and on-premises licenses.

    What other advice do I have?

    We haven't integrated Data Catalog and Data Literacy yet. Our client is a little bit behind on being able to utilize these aspects that we've presented for additional value. 

    My advice would be to partner with an integrator. erwin has quite a few of them. If you're going to jump into this in earnest, you're going to need to have that experience and support.

The biggest lesson I have learned is that the only limitation is the imagination. Anything is possible. This product has quite strong capabilities, and I've seen what you can come up with in terms of innovative flows, processes, automation, and so on.

The next lesson concerns how automation fits within a company's framework: embrace automation. There are good capabilities to build on, certainly within data cataloging, data governance, and so forth.

    I rate erwin Data Intelligence for Data Governance a nine out of ten. 

    Which deployment model are you using for this solution?

    Private Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: I am a real user, and this review is based on my own experience and opinions.
Tracy Hautenen Kriel, Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User

    Thanks for the great review! How do you find the interaction between the cloud instance of DIS obtaining metadata from on-prem DBMS solutions?

reviewer2191656, Release Train Engineer (RTE) at a pharma/biotech company with 10,001+ employees
    Real User
    Top 10
    Saves us time and reduces the number of bugs by automatically generating software
    Pros and Cons
    • "The solution saves time in data discovery and understanding our entire organization's data."
    • "The technical support could be improved."

    What is our primary use case?

    We use erwin Data Intelligence to map the data structures from the source systems to our logical data model. Based on this mapping, the tool automatically generates ETL procedures for us.
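
As a rough sketch of the pattern (the mapping shape and generated SQL below are invented; the procedures erwin Data Intelligence actually generates look different), a source-to-target mapping can be rendered mechanically into ETL code:

```python
# Hypothetical source-to-target mapping: target column -> source expression.
MAPPING = {
    "target": "DW.CUSTOMER_DIM",
    "source": "SRC.CUSTOMER",
    "columns": [
        ("CUST_KEY", "CUSTOMER_ID"),                   # direct move
        ("FULL_NAME", "FIRST_NM || ' ' || LAST_NM"),   # transformation rule
    ],
}

def generate_etl(mapping: dict) -> str:
    """Render one INSERT...SELECT from the mapping, with no hand-written code."""
    tgt_cols = ", ".join(t for t, _ in mapping["columns"])
    src_exprs = ", ".join(s for _, s in mapping["columns"])
    return (f"INSERT INTO {mapping['target']} ({tgt_cols})\n"
            f"SELECT {src_exprs}\n"
            f"FROM {mapping['source']};")

print(generate_etl(MAPPING))
```

Because a generator rather than a developer writes the repetitive code, a fix to the generation logic propagates everywhere at once, which is where the reduction in bugs comes from.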

    How has it helped my organization?

    The solution saves us time and reduces the number of bugs by automatically generating software, rather than manually creating it.

    The solution saves time in data discovery and understanding our entire organization's data.

    What needs improvement?

    The technical support could be improved. When we had an issue, we were given vague answers that did not resolve the issue.

    For how long have I used the solution?

    I have been using erwin Data Intelligence for three years.

    What do I think about the stability of the solution?

    The solution is stable.

    What do I think about the scalability of the solution?

    The solution is scalable.

    How are customer service and support?

    We have Premier Support, which provides us with quick access to the support team. However, it does not accelerate the resolution of our issues. It took almost a year for us to get the impression that they were listening to us. It took another half a year for them to understand the issue, and another half a year to resolve it.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    We previously used Excel spreadsheets to do the mapping before switching to erwin Data Intelligence for the automation.

    What's my experience with pricing, setup cost, and licensing?

    The price is too high. We pay 41,000 Swiss francs for five users.

    I give the pricing a three out of ten.

    What other advice do I have?

    I give erwin Data Intelligence an eight out of ten.

    Premier Support has added minimal value to our overall investment.

    I recommend doing a POC for erwin Data Intelligence before moving forward to ensure that it meets all requirements.

    Which deployment model are you using for this solution?

    Public Cloud
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer2230059, Project Coordinator at a computer software company with 201-500 employees
    Real User
    Top 20
    Metadata Manager enables you to immediately see all systems as well as tables, columns, and views, well laid out
    Pros and Cons
    • "Mind map... is a really good feature because it is very helpful in seeing which column's tables are related. Also, you can flag them with "sensitive data" and other indicators. You can also customize your own features for the mind map. That was another very robust feature."
    • "Really huge datasets, where the logical names or the lexicons weren't groomed or maintained well, were the only area where it really had room for improvement. A huge data set would cause erwin to crash. If there were half a million or 1 million tables, erwin would hang."

    What is our primary use case?

The use cases were for a large federal government agency with 10 smaller agencies, handling all of the metadata for that agency; all of that metadata is sensitive PHI or PII. It includes Social Security numbers and all of the metadata for provider and beneficiary health records. The purpose of the agencies is to prevent fraud, waste, and abuse of American taxpayers' money.

    One of the biggest use cases is to do mappings, manual and auto mappings, and data lineage. The data is used by the agency for prosecution when they find fraud and waste.

At a very high level, what the Centers for Medicare & Medicaid Services wants is the ability to ingest its metadata so that there is transparency. It also wants the metadata to be up to date, with the ability to interpret it both technically and for business use cases, meaning business terms, policies, and rules.

They want end-users with different levels of technical acumen to be able to find information easily and quickly. A data steward would be a quasi-technical person, like me, who understands enough to retrieve information or a lineage.

    A business user would be either a congressman or congresswoman or a project manager who would see visual representations. erwin has a lot of really good data visualization capabilities.

    The use cases include being able to quickly look at data and evaluate it on many levels, from the granular level of a column, table, or even a view, to a zoomed-out level to see how many of a certain table or column are in a data set from each agency.

    Another use case is to take the data out of legacy tools, like Excel spreadsheets. And in some cases, agencies are still using a mainframe from the 1960s where some of that data is stuck.

    What is most valuable?

    The Standard Data Connectors we used were for Snowflake, RedShift, SQL, IBM, and others. All of the standard data connectors worked. One problem that our team ran into was that some of the applications didn't really do the best job of grooming and maintaining their data. One particular system had 1 million tables, which meant there were a couple of million columns. The size of the data was an issue, but the data connectors worked. There were no APIs used, just database connectors.

    In terms of seeing the technical details needed to manage the data landscape, when you log in to erwin it's broken down into modules. One of them is Metadata Manager, and that is one of the things I liked about it. It's broken down according to the work you need to do. With Metadata Manager, you immediately see all of your systems and, in our case, The Centers for Medicare & Medicaid Services had many systems. And in the left-hand panel, there was a really good user interface to expand your systems. You can see your environments and what's in them, and then you see your tables, columns, views, and anything else.

    In the center of the UI, you can do your work, such as run a lineage, mind map, or look at an impact analysis. It's set up well visually, and it's also set up like old-school computer science with correct folders.

    Another work area module, called Mapping Manager, is where you do all of your mapping. It gives you a mirror view of everything that's in your systems and environments, and you can work with that metadata on your mappings. You can also export and publish your mappings and, once you've done your mappings, you can go back into Metadata Manager and run an impact analysis and look at your mind map.

    The third module for business users is the Business Glossary Manager where you can create your business terms, policies, and rules, assign them and see how many are spread across which environments. It gives you a visual in addition to the folder structure.

These modules are the strength of erwin's Data Intelligence Suite. People who are non-technical can learn about data governance using this tool, like I did. The tool we're now using instead of erwin requires too much searching and linking things, as if you're using Facebook.

    What needs improvement?

    I and the DevOps architect think erwin Data Intelligence is a better product technically because it's more designed for a technical user. But it couldn't pass one penetration test. In the federal government, if there's one problem like that, they're not happy anymore.

    Also, really huge datasets, where the logical names or the lexicons weren't groomed or maintained well, were the only area where it really had room for improvement. A huge data set would cause erwin to crash. If there were half a million or 1 million tables, erwin would hang. And then, when the metadata came in, it would need a lot of manual work to clean it up.

    For how long have I used the solution?

    I used erwin Data Intelligence by Quest for a year and a half.

    What do I think about the stability of the solution?

    The stability issues were around erwin's not being able to handle that huge amount of metadata.

    How are customer service and support?

    We used their Premier Support and had weekly meetings with them to go over any tickets. There were two people assigned to us. One was their government specialist and the other was their customer service person in charge of their support.

    The biggest value of Premier Support is that you are able to verbalize feedback and get input on defects and the fact that you have an open forum. You can communicate with people face to face and collaborate. You can discuss an issue, provide input, and get things resolved.

    How would you rate customer service and support?

    Positive

    How was the initial setup?

    I was a technical and business user. The team I worked on stood the erwin Data Intelligence suite up within the MAG (Microsoft Azure for Government) environment. We put it through the penetration test and hooked it up to the LDAP with all the security requirements.

    Standing up a metadata governance platform is always going to be complex. It was complex for us because it was being used for the government and we had a lot of penetration tests and high-level cybersecurity requirements. That made it complex.

    And maintaining the system is what our team did. Our contract included getting it stood up, integrated, configured, and then ensuring it kept running. It was only available from eight in the morning until seven at night, but that was our job. We bought erwin off the shelf. We weren't working with them on customized features.

    What about the implementation team?

    Our team was the integration team and we had five people involved.

    What's my experience with pricing, setup cost, and licensing?

    erwin was at a good price. The federal government wouldn't buy something if the pricing wasn't good. We have to use FedRAMP pricing, so I'm not sure about what erwin's pricing would be "out in the wild," for a regular company. But they do work with you on the price.

    Which other solutions did I evaluate?

    The erwin interface was a good balance between technical and visual information compared to some other products that we looked at. The one that we switched to is a "glorified social" solution. It's about socializing the metadata and ways for people to search and create articles. They can also link and rate the veracity of a particular data source and write comments. 

Whereas with erwin, you can actually do things, like create your own lineage and mind maps. That is a really good feature because it is very helpful in seeing which columns and tables are related. Also, you can flag them with "sensitive data" and other indicators. You can also customize your own features for the mind map. That was another very robust feature.

    What other advice do I have?

Faster delivery of data pipelines at less cost is more a question for the architect than for me, but it is possible if the metadata sources are clean and set up correctly. This is not an erwin-specific topic. My understanding is that a lot of data catalogs are dependent on what is called the "logical name" of the tables and columns. If the data steward or the data analyst never labeled or created a correct lexicon for any of their metadata, then it's going to slow down the whole process, whether it's erwin, Alation, Informatica, or Collibra. erwin can make data pipelines faster, but it's dependent on how clean the metadata is and how well it was set up in the first place. And I believe it does save costs, because the Centers for Medicare & Medicaid Services wouldn't be using it if it wasn't cost-effective.

    erwin Data Intelligence is a good platform and I wish we were still using it.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer1935849, Architect at an insurance company with 10,001+ employees
    Real User
    Manages all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions
    Pros and Cons
    • "It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool."
    • "We still need another layer of data quality assessments on the source to see if it is sending us the wrong data or if there are some issues with the source data. For those things, we need a rule-based data quality assessment or scoring where we can assess tools or other technology stacks. We need to be able to leverage where the business comes in, defining some business rules and have the ability to execute those rules, then score the data quality of all those attributes. Data quality is definitely not what we are leveraging from this tool, as of today."

    What is our primary use case?

    The Data Intelligence suite helps us manage all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions. These are all managed in this tool. There is definitely extended use of this data set, which includes using the metadata that we have built. Using that metadata, we also integrate with other ETL tools to pull the metadata and use it in our data transformations. It is very tightly coupled with our data processing in general. We also use erwin Data Modeler, which helps us build our metadata, business definitions, and the physical data model of the data structures that we have to manage. These two tools work hand-in-hand to manage our data governance metadata capabilities and support many business processes.

I manage the data architecture as well as the whole data governance team that designed the data pipelines. We designed the overall data infrastructure plus the actual governance processes. The stewards, who work with the data in the business, set up the metadata and manage this tool every day, end-to-end.

    How has it helped my organization?

The benefit of the solution has been the adoption by a lot of business partners using and leveraging our data through our governance processes. We have metrics on how many users have been capturing and using it. We have data consultants and other data governance teams who are set up to review these processes and ensure that nobody is bypassing them. We use this tool in the middle of our work processes for utilization of data on the tail end, letting the business do self-service and build their own IT things.

When we manage our data processes, we know that there are upstream sources and downstream systems, and that the downstream systems could be impacted by changes coming in from the source. Through the lineage and impact analysis that this tool brings to the table, we have been able to identify system changes that could impact all downstream systems. That is a big plus, because IT and production support teams are now able to use this tool to identify the impact of any issues with the data or any data quality gaps. They can notify all the recipients upfront, with proper business communications of the impacts.

For any company mature enough to have implemented any of these data governance rules or principles, these are the building blocks of the actual process. This is critical because we want the business to self-serve. We can build data lakes or data warehouses using our data pipelines, but if nobody can actually use the data, or see what information is available without going through IT, that defeats the whole purpose of the additional work. It is a data platform that allows any business process to come in and self-serve, building its own processes without a lot of IT dependencies.

    There is a data science function where a lot of critical operational reporting can be done. Users leverage this tool to be able to discover what information is available, and it's very heavily used.

    If we start capturing additional data about some metadata, then we can define our own user-defined attributes, which we can then start capturing. It does provide all the information that we want to manage. For our own processes, we have some special tags that we have been able to configure quickly through this tool to start capturing that information.

    We have our own homegrown solutions built around the data that we are capturing in the tool. We build our own pipelines and have our own homegrown ETL tools built using Spark and cloud-based ecosystems. We capture all the metadata in this tool and all the transformation business rules are captured there too. We have API-level interfaces built into the tool to pull the data at the runtime. We then use that information to build our pipelines.
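
As a hedged sketch of that runtime hand-off (the endpoint path, authentication, and response shape below are hypothetical, and the requests library stands in for whatever HTTP client is actually used; only the overall pattern follows the description):

```python
import requests

def fetch_mappings(base_url: str, project: str, token: str) -> list:
    """Pull mapping metadata and transformation rules at pipeline build time."""
    resp = requests.get(
        f"{base_url}/api/mappings",                     # hypothetical endpoint
        params={"project": project},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["mappings"]                      # assumed response shape

def build_pipeline(mappings: list) -> None:
    for m in mappings:
        # In the real setup these rules feed the Spark-based ETL framework;
        # this line only marks the hand-off point.
        print(f"building step: {m['source']} -> {m['target']}")
```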

    This tool allows us to bring in any data stewards in the business area to use this tool and set up the metadata, so we don't have to spend a lot of time in IT understanding all the data transformation rules. The business can set up the business metadata, and once it is set up, IT can then use the metadata directly, which feeds into our ETL tool.

    Impact analysis is a huge benefit because it gives us access to our pipeline and data mapping. It captures the source systems from which the data came. For each source system, there is good lineage so we can identify where it came from. Then, it is loaded into our clean zone and data warehouse, where I have reports, data extracts, API calls, and the web application layer. This provides access to all the interfaces and how information has been consumed. Impact analysis, at an IT and field levels, lets me determine:

    • What kind of business rules are applied. 
    • How data has been transformed from each stage. 
    • How the data is consumed and moved to different data marts or reporting layers. 

    Our visibility is now huge, creating a good IT and business process. With confidence, they can assess where the information is, who is using it, and what applications are impacted if that information is not available, inaccurate, or if there are any issues at the source. That impact analysis part is a very strong use case of this tool.
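
Conceptually, this kind of field-level impact analysis is reachability over the lineage graph that the mappings define. A minimal sketch (the fields and edges below are invented for illustration):

```python
from collections import deque

# field -> fields that consume it downstream (derived from the mappings)
LINEAGE = {
    "SRC.CLAIM.AMT": ["CLEAN.CLAIM.AMT"],
    "CLEAN.CLAIM.AMT": ["DW.CLAIM_FACT.CLAIM_AMT"],
    "DW.CLAIM_FACT.CLAIM_AMT": ["RPT.MONTHLY_COST", "API.CLAIM_SUMMARY"],
}

def downstream_impact(field: str) -> set:
    """Everything reachable from a changed or broken source field."""
    seen, queue = set(), deque([field])
    while queue:
        for nxt in LINEAGE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(downstream_impact("SRC.CLAIM.AMT"))
# {'CLEAN.CLAIM.AMT', 'DW.CLAIM_FACT.CLAIM_AMT', 'RPT.MONTHLY_COST', 'API.CLAIM_SUMMARY'}
```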

    What is most valuable?

    The most critical features are the metadata management and data mapping, which includes the reference data management and code set management. Its capabilities allow us to capture metadata plus use it to define how the data lineage should be built, i.e., the data mapping aspects of it. The data mapping component is a little unique to this tool, as it allows the entire data lineage and impact analysis to be easily done. It has very good visuals, which it displays in a build to show the data lineage for all the metadata that we are capturing.

    Our physical data mapping is using this tool. The component of capturing the metadata, integrating the code set managers and reference data management aspects of it with the data pipeline are unique to this tool. They are definitely the key differentiators that we were looking for when picking this tool.

    erwin DI provides visibility into our organization’s data for our IT, data governance, and business users. There is a business-facing view of the data. There is an IT version of the tool that allows us to set up the metadata managed by our IT users or data stewards, who are users of the data, to set up the metadata. Then, the same tool has a very good business portal that takes the same information in a read-only way and presents it back in a very business-user friendly way. We call it a business portal. This suite of applications provides us end-to-end data governance from both the IT's and business users' perspective.

    It is a central place for everybody to start any ETL data pipeline builds. This tool is being heavily used, plus it's heavily integrated with all the ETL data pipeline design and build processes. Nobody can bypass these processes and do something without going through this tool.

The business portal allows us to search the metadata and do data discovery. Business users come in and are presented with data catalog-type information. This means all the metadata that we capture, such as PII masking, dictionaries, and the data dictionary, is set up as well. That aspect is very heavily used.

There are a lot of Data Connectors that gather metadata from all the different source systems and data stores. We configure those Data Connectors, then install them. The Data Connector that loads all the metadata from the erwin Data Modeler tool is XML-based.
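
To illustrate what an XML-based metadata load involves in principle (the element and attribute names below are invented, not the actual erwin Data Modeler export schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical model export; the real schema is different.
MODEL_XML = """
<model name="claims">
  <entity name="CLAIM">
    <attribute name="CLAIM_ID" type="NUMBER"/>
    <attribute name="CLAIM_AMT" type="DECIMAL(12,2)"/>
  </entity>
</model>
"""

def load_entities(xml_text: str) -> dict:
    """Parse entities and attributes out of a model export for the catalog."""
    root = ET.fromstring(xml_text)
    return {
        e.get("name"): [(a.get("name"), a.get("type")) for a in e.findall("attribute")]
        for e in root.findall("entity")
    }

print(load_entities(MODEL_XML))
# {'CLAIM': [('CLAIM_ID', 'NUMBER'), ('CLAIM_AMT', 'DECIMAL(12,2)')]}
```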

    The solution delivers up-to-date and detailed data lineage. It provides you all the business rules that data fields are going through by using visualization. It provides very good visualization, allowing us to quickly assess the impact in an understandable way.

    All the metadata and business glossaries are captured right there in the tool. All of these data points are discoverable, so we can search through them. Once you know the business attribute you are looking for, then you are able to find where in the data warehouse this information lives. It provides you technical lineage right from the business glossary. It provides a data discovery feature, so you are able to do a complete discovery on your own.

    What needs improvement?

Data quality has many facets, but we are definitely not using the core data quality features of this tool. Data quality has improved because the core data stewards, data engineers, and business sponsors know what data they are looking for and how the data should move, and they are setting up those rules. We still need another layer of data quality assessment at the source, to see if it is sending us wrong data or if there are issues with the source data. For those things, we need a rule-based data quality assessment or scoring capability, whether from this tool or another technology stack. We need to be able to let the business define data quality rules, execute those rules, and then score the data quality of all those attributes. Data quality is definitely not something we are leveraging from this tool as of today.

    For how long have I used the solution?

    I have been using it for four or five years.

    What do I think about the stability of the solution?

    We had a couple of issues here and there, but nothing drastic. There has been a lot of adoption of the tool increasing data usage. There have been a few issues with this, but not blackout-type issues, and we were able to recover. 

    There were some stability issues in the very beginning. Things are getting better with its community piece.

    What do I think about the scalability of the solution?

    Scalability has room for improvement. It tends to slow down when we have large volumes of data, and it takes more time. They could scale better, as we have seen some degradation in performance when we work with large data sets.

    How are customer service and support?

    We have some open tickets with them from time to time. They have definitely promptly responded and provided solutions. There have been no issues.

    Support has changed hands many times, though we always land on a good support model. I would rate the technical support as seven out of 10.

    They cannot just custom build solutions for us. These are things that they will deliver and add to releases. 

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

We were previously using Collibra and Talend data management. We switched to this tool to help us build our data mapping, which is not just field-level mapping. There are also aspects of codeset management, where we translate different source codes into the enterprise codes we are standardizing on. With the reference data management aspects, we can build our own data sets within the tool, and those data sets are also integrated with our data pipeline.

We were definitely not sticking with the Talend tool because it increased our delivery time for data. When we were looking at other platforms, we needed a tool that captured data mapping in a way that a program could systematically read and understand, then generate dynamic code for an ETL process or pipeline.

    How was the initial setup?

    It was through AWS. The package was very easy to install. 

    What was our ROI?

If I used a traditional ETL tool and built everything through the standard IT process, it would take five days to get even a very simple data mapping to the deployment phase. Using this solution, the IT cost is cut down to less than a day. Since the business requirements are now captured directly in the tool, I don't need IT support to execute them. The only part being executed and deployed from the metadata is my ETL code, which is the information that the business captures. So, we can build data pipelines at a very rapid rate with a lot of accuracy.

During maintenance windows, when things are changing and updating, the business would not normally have access to the ETL tool, the code, and the rules executed in the code. However, using this tool with its data governance and data mapping, what is captured is what actually gets executed: the rules are defined first, then fed into the ETL process. This happens weekly because we dynamically generate the ETL from our business users' mappings. That is definitely a big advantage. Our data will never drift from the rules that the business has set up.

    If people cannot do discovery on their own, then you will be adding a lot of resource power, i.e., manpower, to support the business usage of the data. A lot of money is saved because we can run a very lean shop and don't have to onboard a lot of resources. This saves a lot on manpower costs as well.

    What's my experience with pricing, setup cost, and licensing?

    The licensing cost was very affordable at the time of purchase. It has since been taken over by erwin, then Quest. The tool has gotten a bit more costly, but they are adding more features very quickly. 

    Which other solutions did I evaluate?

    We did a couple of demos with data catalog-type tools, but they didn't have the complete package that we were looking for.

    What other advice do I have?

    Our only systematic process for refreshing metadata is from the erwin Data Modeler tool. Whenever those updates are done, we then have a systematic way to update the metadata in our reference tool.

    I would rate the product as eight out of 10. It is a good tool with a lot of good features. We have a whole laundry list of things that we are still looking for, which we have shared with them, e.g., improving stability and the product's overall health. The cost is going up, but it provides us all the information that we need. The basic building blocks of our governance are tightly coupled with this tool.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Amazon Web Services (AWS)
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Business Intelligence BA at an insurance company with 10,001+ employees
    Real User
    Good traceability and lineage features, impact analysis is helpful in the decision-making process, and the support is good
    Pros and Cons
    • "Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream."
    • "There is room for improvement with the data cataloging capability. Right now, there is a list of a lot of sources that they can catalog, or they can create metadata upon, but if they can add more then that would be a good plus for this tool."

    What is our primary use case?

    Our work involves data warehousing and we originally implemented this product because we needed a tool to document our mapping documents.

    As a company, we are not heavily invested in the cloud. Our on-premises deployment may change in the future but it depends on infrastructure decisions.

    How has it helped my organization?

    The automated data lineage is very useful. We used to work in Excel, and there is no way to trace the lineage of the data. Since we started working with DI, we have been able to quickly trace the lineage, as well as do an impact analysis.

    We do not use the ETL functionality. I do know, however, that there is a feature that allows you to export your mapping into Informatica.

Using this product has improved our process in several ways. When we were using Excel, we did not know for sure that what was entered in the database was what had been entered into Excel. One of the reasons for this is that the Excel documents contained a lot of typos. Often, we didn't know the data type or the data length, and these are some of the reasons that lineage and traceability are important. Prior to this, we had none. Now, because we're able to create metadata from our databases, it's easier for us to create mappings. As a result, the typos have virtually disappeared, because we drag and drop each field instead of typing it.

    Another important thing is that with Excel, it is too cumbersome or next to impossible to document the source path for XSD files. With DI, since we're able to model it in the tool, we can drag and drop and we don't have to type the source path. It's automatic.

    This tool has taken us from having nothing to being very efficient. It's really hard to compare because we have never had these features before.

    The data pipeline definitely improved the speed of analysis in our use case. We have not timed it but having the lineage, and being able to just click, makes it easier and faster. We believe that we are the envy of other departments that are not using DI. For them to conduct an impact analysis takes perhaps a few minutes or even a few hours, whereas, for us, it takes less than one minute to complete.

We have automated parts of our data management infrastructure, and it has had a positive effect on our quality and speed of delivery. We have a template that the system uses to create SQL code for us. The code handles the moving of data, and if fields are direct moves, we don't need a person to code the operation. Instead, we just run the template.
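
A minimal sketch of that direct-move template idea (the table and field names are illustrative, and the template format is invented, not the one the system actually uses):

```python
# Direct-move mappings: (source table, target table, fields copied one-to-one).
DIRECT_MOVES = [
    ("STG.POLICY", "DW.POLICY", ["POLICY_ID", "POLICY_TYPE", "EFFECTIVE_DT"]),
]

TEMPLATE = "INSERT INTO {target} ({cols})\nSELECT {cols}\nFROM {source};"

for source, target, fields in DIRECT_MOVES:
    # No hand-written code per field: the template does the work.
    print(TEMPLATE.format(target=target, source=source, cols=", ".join(fields)))
```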

    The automation that we use is isolated and not for everything, but it affects our cost and risk in a positive way because it works efficiently to produce code.

    It is reasonable to say that DI's generation of production code through automated code engineering reduces the cost from initial concept to implementation. However, it is only a small percentage of our usage.

    With respect to the transparency and accuracy of data movement and data integration, this solution has had a positive impact on our process. If we bring a new source system into the data warehouse and the interconnection between that system and us is through XML then it's easier for us to start the mapping in DI. It is both efficient and effective. Downstream, things are more efficient as well. It used to take days for the BAs to do the mapping and now, it probably takes less than one hour.

    We have tried the AIMatch feature a couple of times, and it was okay. It is intended to help automatically discover relationships and associations in data and I found that it was positive, albeit more relevant to the data governance team, of which I am not part. I think that it is a feature in its infancy and there is a lot of room for improvement.

    Overall, DI's data cataloging, data literacy, and automation have helped our decision-makers because when a source wants to change something, we immediately know what the impact is going to be downstream. For example, if a source were to say "Okay, we're no longer going to send this field to you," then immediately we will know what the impact downstream will be. In response, either we can inform upstream to hold off on making changes, or we can inform the departments that will be impacted. That in itself has a lot of value.

    What is most valuable?

The most valuable features are lineage and impact analysis. In our use case, we deal with data transformations from multiple sources into our data warehouse. As part of this process, we need traceability of the fields, from either the source or the presentation layer. If something is changing, it helps us determine the full impact of the modification. Similarly, if we need to know where a specific field in the presentation layer comes from, we can trace it back to its location in the source.
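
In principle, tracing back from the presentation layer is just a walk up the documented mapping chain. A minimal sketch (the fields and derivation edges are invented for illustration):

```python
# field -> the field it was derived from, taken from the mapping documents
UPSTREAM = {
    "RPT.PREMIUM_TOTAL": "DW.POLICY_FACT.PREMIUM",
    "DW.POLICY_FACT.PREMIUM": "STG.POLICY.PREM_AMT",
    "STG.POLICY.PREM_AMT": "SRC.POLICY.PREM_AMT",
}

def trace_to_source(field: str) -> list:
    """Follow a presentation-layer field back to its originating source field."""
    path = [field]
    while path[-1] in UPSTREAM:
        path.append(UPSTREAM[path[-1]])
    return path

print(" <- ".join(trace_to_source("RPT.PREMIUM_TOTAL")))
# RPT.PREMIUM_TOTAL <- DW.POLICY_FACT.PREMIUM <- STG.POLICY.PREM_AMT <- SRC.POLICY.PREM_AMT
```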

    The feature used to fill metadata is very useful for us because we can replicate the data into our analytics as metadata.

    What needs improvement?

    Improvement is required for the AIMatch feature, which is supposed to help automatically discover relationships in data. It is a feature that is in its infancy and I have not used it more than a few times.

There is room for improvement with the data cataloging capability. Right now, there is a long list of sources that they can catalog, or create metadata upon, but adding more would be a good plus for this tool. The reason we need this functionality is that we don't use erwin's modeling tool. Instead, we use a tool called Power Viewer. Both erwin and Power Viewer can create XSD files, but you cannot import a file created by Power Viewer into erwin. If they were more compatible with Power Viewer and other data modeling solutions, it would be a plus. As it is now, if we have a data model exported into XSD format from Power Viewer, it's really hard or next to impossible to import it into DI.

    We have a lot of projects and a large number of users, and one feature that is missing is being able to assign users to groups. For example, it would be nice to have IDs such that all of the users from finance have the same one. This would make it much easier to manage the accounts.

    For how long have I used the solution?

    We have been using erwin Data Intelligence (DI) for Data Governance since 2013.

    What do I think about the stability of the solution?

    The stability of DI has come a long way. Now, it's very stable. If I were rating it six years ago, my assessment would definitely have been different. At this time, however, I have no complaints.

    What do I think about the scalability of the solution?

    We have the enterprise version and we can add as many projects as we need to. It would be helpful if we had a feature to keep better track of the users, such as a group membership field.

    We are the only department in the organization that uses this product. This is because, in our department, we handle data warehousing, and mapping documentation is very important. It is like a bible to us and without it, we cannot function properly. We use it very extensively and other departments are now considering it.

    In terms of roles, we have BAs with read-write access. We also have power users, who are the ones that work with the data catalog, create the projects, and make sure that the metadata is all up-to-date. Maintenance of this type also ensures that metadata is removed when it is no longer in use. We have QA/Dev roles that are read-only. These people read the mapping and translate it into code, or do QA on it. Finally, we have an audit role, where the users have read-only access to everything.

    One of the tips that I have for users is that if there are a lot of mapping documents, for example, more than a few hundred rows for a few hundred records, it's easier to download it, do it in Excel, and upload it again.

    All roles considered, we have between 30 and 40 users.

    How are customer service and technical support?

    The technical support is good.

    When erwin took over this product from the previous company, the support improved. The previous company was not as large and as such, erwin is more structured and has processes in place. For example, if we report issues, erwin has its own portal. We also have a specific channel to go through, whereas previously, we contacted support through our account manager.

    Which solution did I use previously and why did I switch?

    Other than what we were doing with Excel, we were not using another solution prior to this one.

    How was the initial setup?

    We have set up this product multiple times. The first setup was very challenging, but that was before erwin inherited or bought this product from the original developer. When erwin took over, there were lots of improvements made. As it is now, the initial setup is not complex and is no longer an issue. However, when we first started in 2013, it was a different story.

    When we first deployed, close to 10 years ago, we were new to the product and we had a lot of challenges. It is now fairly easy to do and moreover, erwin has good support if we run into any trouble. I don't recall exactly how long it took to initially deploy, but I would estimate a full day. Nowadays, given our experience and what we know, it would take less than half a day. Perhaps one or two hours would be sufficient.

    The actual deployment of the tool itself has no value because it's not a transactional system. With a transactional system, for example, I can do things like point of sale. In the case of this product, BAs create the mappings. That said, once it's deployed, the BAs can begin working to create mappings. Immediately, we can perform data cataloging, and given the correct connections, for example to Oracle, we can begin to use the tool right away. In that sense, there is a good time-to-value and it requires minimal support to get everything running.

    We have an enterprise version, so if a new department wants to use it then we don't need to install it again. It is deployed on a single system and we give access to other departments, as required. As far as installing the software on a new machine, we have a rough plan that we follow but it is not a formal one that is written down or optimized for efficiency.

    What about the implementation team?

    We had support from our reseller during the initial setup but they were not on-site.

    Maintenance is done in-house and we have at least three people who are responsible. Because of our company structure, there is one who handles the application or web server. A second person is responsible for AWS, and finally, there is somebody like me on the administrative side.

    What was our ROI?

    We used to calculate ROI several years ago but are no longer concerned with it. This product is very effective and it has made our jobs easier, which is a good return.

    What's my experience with pricing, setup cost, and licensing?

    We operate on a yearly subscription and because it is an enterprise license we only have one. It is not dependent on the number of users. This product is not expensive compared to the other ones on the market.

    We did not buy the full DI, so the Business Glossary costs us extra. As such, we receive two bills from erwin every year.

    Which other solutions did I evaluate?

    We evaluated Informatica but after we completed a cost-benefit analysis, we opted to not move forward with it.

    What other advice do I have?

    My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market.

    I would rate this solution a nine out of ten.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer2253402, Delivery Director at a computer software company with 1,001-5,000 employees
    Real User
    Top 5
    A valuable data mapping manager, and good data transparency, but requires better integration for data quality
    Pros and Cons
    • "The data mapping manager is the most valuable feature."
    • "The data quality assessment requires third-party components and a separate license."

    What is our primary use case?

    We are a consultant for erwin Data Intelligence by Quest and provide the service to our customers for data access discovery.

    How has it helped my organization?

    The standard data connectors for automation, metadata, harvesting, and ingestion are easy to use.

    The solution enables us to deliver data pipelines faster and with a 20 to 30 percent reduction in cost.

    erwin provides an immediate view of the technical details required to manage our data landscape. Data transparency is increasing which helps our IT operations.

    It delivers up-to-date and detailed data lineage which is important.

    erwin helps us with data discovery and understanding of our entire organization's data.

    The solution provides visibility into our organization's data for our IT, data governance, and business users which we require to build the compound report.

    The asset discovery and collaboration provided by the data quality feature is good.

    Its ability to affect data users' confidence levels when they are utilizing data is admirable.

    erwin's capacity to tackle challenges associated with data quality and offer the required information for users to make well-informed decisions is effective in overseeing the regional database and metadata.

    What is most valuable?

    The data mapping manager is the most valuable feature.

    What needs improvement?

    The data quality assessment requires third-party components and a separate license. 

    I would like to have better integration around the data quality.

    I would appreciate the inclusion of a non-structured database feature in a future release.

    For how long have I used the solution?

    I am currently an IT user for erwin Data Intelligence by Quest. 

    What do I think about the stability of the solution?

    The solution is stable.

    What do I think about the scalability of the solution?

    We might encounter problems with disaster recovery while attempting to scale.

    How are customer service and support?

    The response time for technical support is slow.

    How would you rate customer service and support?

    Neutral

    Which solution did I use previously and why did I switch?

    Compared to Informatica, erwin Data Intelligence by Quest is more integrated and has a better UI, but Informatica has a larger product line geared toward enterprise business. erwin Data Intelligence by Quest is focused on two areas.

    How was the initial setup?

    The initial setup is straightforward. Before deployment, we engaged in discussions with the data owner and the DBA team. Subsequently, we communicated with the business users, who constitute the general consumer base. We assisted them in defining the business terms through a workshop. 

    The deployment took six months and required five people to complete.

    What about the implementation team?

    We had help from the vendor during the implementation. We are satisfied with their help.

    What's my experience with pricing, setup cost, and licensing?

    The price is reasonable, and a subscription is required.

    What other advice do I have?

    I would rate erwin Data Intelligence by Quest seven out of ten.

    Before implementing erwin Data Intelligence by Quest, potential users should first determine their use case.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
reviewer1270386, Solution Architect at a pharma/biotech company with 10,001+ employees
    Real User
    Has the ability to run automation scripts against metadata and metadata mappings
    Pros and Cons
    • "The possibility to write automation scripts is the biggest benefit for us. We have several products with metadata and metadata mapping capabilities. The big difference when we were choosing this product was the ability to run automation scripts against metadata and metadata mappings. Right now, we have a very high level of automation based on these automation scripts, so it's really the core feature for us."
    • "The SDK behind this entire product needs improvement. The company really should focus more on this because we were finding some inconsistencies on the LDK level. Everything worked fine from the UI perspective, but when we started doing some deep automation scripts going through multiple API calls inside the tool, then only some pieces of it work or it would not return the exact data it was supposed to do."

    What is our primary use case?

    There are three big areas that we use it for right now: metadata management as a whole, versioning of metadata, and metadata mappings and automation. We have started to adopt data profiling from this tool, but it is an ongoing process. I will be adding these capabilities to my team, probably in Q1 of this year.

    How has it helped my organization?

    It is improving just a small piece of our company. We are an extremely big company, so across the company as a whole, the adoption rate is probably zero percent because I think it is only implemented in the development team of our platform.

    If you look at this from the perspective of the platform that we are delivering, the adoption rate is around 90 percent because almost every area and step somehow touches the tool. We, as a program, are delivering a data-oriented platform, and erwin DI is helping us build that for our customers. 

    The tool is not like Outlook, where everyone in the company really uses it, or SharePoint, which is company-wide. We are using this in our program as a tool to help my technical analysts, data modelers, developers, etc.

    What is most valuable?

    The possibility to write automation scripts is the biggest benefit for us. We have several products with metadata and metadata mapping capabilities. The big difference when we were choosing this product was the ability to run automation scripts against metadata and metadata mappings. Right now, we have a very high level of automation based on these automation scripts, so it's really the core feature for us.

    I'm working as a solution architect in one of the biggest projects and we really need to deliver quickly. The natural thing was that we went through the automation and started adopting some small pieces. Now, we have all our software development processes built around the automation capabilities. I can estimate that we lowered our time to market by 70 percent right now using these automation scripts, which is a really big thing.
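
    To make the pattern concrete, here is a minimal, purely illustrative sketch of the metadata-driven code generation described above, written in Python under stated assumptions: the mappings_export.csv file and its column names are hypothetical stand-ins for an export of source-to-target mappings, not erwin's actual SDK or template API.

```python
# Illustrative sketch only: generate one ETL statement per source-to-target
# mapping row. The CSV export and its column names (source_table, source_col,
# target_table, target_col) are assumptions, not erwin's actual interfaces.
import csv
from string import Template

ETL_TEMPLATE = Template(
    "INSERT INTO $target_table ($target_col)\n"
    "SELECT $source_col FROM $source_table;\n"
)

def generate_etl(mappings_path: str) -> str:
    """Render an ETL statement for every mapping row in the CSV export."""
    statements = []
    with open(mappings_path, newline="") as fh:
        for row in csv.DictReader(fh):
            statements.append(ETL_TEMPLATE.substitute(row))
    return "\n".join(statements)

if __name__ == "__main__":
    print(generate_etl("mappings_export.csv"))  # hypothetical export file
```

    The design point is that accurate, reviewed mappings become the single input from which repetitive data-movement code is generated, which is where the time-to-market reduction comes from.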

    The second-best feature that we are heavily using in our project is the capability to create the mappings and treat them as documentation. This lets us show the mappings to the different stakeholders, run reviews, etc. Having this in one product is very nice.

    What needs improvement?

    The SDK behind this entire product needs improvement. The company really should focus more on this because we were finding some inconsistencies at the SDK level. Everything worked fine from the UI perspective, but when we started running some deep automation scripts going through multiple API calls inside the tool, only some pieces of it worked or it would not return the exact data it was supposed to. This is the number one area for improvement.

    The tool provides the WSDL API as another way to access the data. It is the same story as with the SDK: we are heavily using this API and finding some inconsistencies in its responses, especially as we go after the more nonstandard features inside. The team has been fixing this for us, so we have some support. The product team probably overlooked this by focusing more on the UI than on the API.
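
    As a rough illustration of the access pattern being described, the sketch below calls a WSDL-described SOAP service from Python using the third-party zeep library. The endpoint URL and the GetMapping operation are hypothetical placeholders, not erwin's actual WSDL operations; the payload check simply reflects the kind of response verification that the inconsistencies above make necessary.

```python
# Illustrative sketch only: consuming a WSDL-described SOAP API from a script.
# The URL and the GetMapping operation are hypothetical, not erwin's real API.
from zeep import Client

WSDL_URL = "https://di.example.com/MappingManager?wsdl"  # placeholder URL

client = Client(WSDL_URL)

# zeep exposes each WSDL operation as a method on client.service. Verifying
# the payload on every call catches the response inconsistencies noted above.
response = client.service.GetMapping(mappingId="12345")  # hypothetical call
if response is None:
    raise RuntimeError("API returned no payload for a known mapping")
print(response)
```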

    For how long have I used the solution?

    We have been using the product for two and a half years.

    What do I think about the stability of the solution?

    There have been no issues with the stability from the erwin DI platform. We haven't encountered any problems for the last two and a half years.

    It is maintained by another team; erwin is maintained by the team that generally maintains our platform. However, the effort is close to zero because there is nothing happening: backups and everything else are automated by default on the shared platforms on which it is installed.

    What do I think about the scalability of the solution?

    It is a Java-based platform, so if there were issues with its performance, we would probably migrate it to a bigger server. Therefore, it can scale.

    It does not have fancy cloud-scaling capabilities, but we don't need them. For this type of tool and deployment, it's sufficient.

    We have around 40 users. All the roles are very different because half of the developers work with different technologies. One-fourth of the users are technical analysts. The rest of the users are data modelers.

    How are customer service and technical support?

    We have used the technical support several times. It's really different based on the complexity of the task. Usually, they meet their SLAs for fixes and changes in the required support time.

    Which solution did I use previously and why did I switch?

    We did not previously use another product.

    We used this product even before it was bought by erwin; before, it was a company called AnalytiX DS. Then, two years ago, erwin bought the company and its product and did some rebranding. We started using this product as version 8.0, then it was migrated to version 8.3, and now we are using version 9.0. We have gone through a few versions of this product.

    How was the initial setup?

    The initial setup was not so simple, but it wasn't hard. On a scale of one to five, with one being easy and five being hard, I would put it at a two.

    It was a new tool with new features, and it had to be installed on-premises, so we struggled a bit with it. We were using it for quite a complex task, so we needed to go through the areas that the tool could potentially support. The work associated with the initial setup, just going through everything to define that, was not so easy.

    Some companies have an initial packet where they show you everything in a very structured way. When we were implementing this, we really needed to discover what we needed, rather than being given documentation showing that this is here, this can contribute to your use case, and so on. It required a lot of effort from our side. In comparison, I'm leading some other PoCs right now with other vendors in different areas, and those vendors contribute a great deal to my ability to assess, install, and use their tools.

    The deployment took two days and was nothing special. It was just a simple Java application with a back-end database.

    Migrating my team to use this tool properly, doing some training, and putting capabilities in place so that people had a reason to use the tool took us around three months. Because we are using this for automation, the automation work has been an ongoing process lasting continuously for these two and a half years, as we adapt to new requirements. So, it is continuous improvement and continuous delivery here.

    What about the implementation team?

    I was involved from the very beginning of the PoC, actively checking the very basic capabilities. Then, I designed how we would use it, leading the whole automation stream around this tool. So, I was involved from the very beginning to the full implementation.

    It took us around three months to introduce this tool.

    What was our ROI?

    If you consider that it takes 70 percent less time to deliver and multiply that by the 40 people who work in the development process, that is a big time savings we can use for more development. From my perspective, there is a very big return on investment for this tool.

    What's my experience with pricing, setup cost, and licensing?

    The licensing cost is around $7,000 per user. This is an estimate.

    There is an additional fee for the server maintenance.

    Which other solutions did I evaluate?

    We evaluated four products and chose erwin. None of the competitors had this out-of-the-box automation feature, which was the biggest thing for me because we were looking for a tool that would allow us to do large-scale automation. When I was searching for this tool, my responsibility was to find one that could be used in our development process as the core automation product, around which we built the whole development lifecycle. In our platform, we do some development around the automation capabilities. Usually, people have a manual process and automate some parts of it; we went the other way. We were searching for automation capabilities and built our entire process around the tool's capabilities so we could use them as much as possible. The key differentiator, right from the very beginning, was the automation capability.

    Other competitors showed us that they had an API we could use to automate somewhere else. Automating somewhere else means, to me, that I need to create some other platform, server, etc., and then maintain it with other resources just to make it run. That was really not enough for us. In addition, erwin already had some written automation templates at the PoC level, which showed us that they had something that worked.

    At the PoC level, erwin was able to convince the customer (us) that this is the automation, this is how it runs, and you can use it almost straightaway.

    What other advice do I have?

    I learned how to automate in the data area and how this is very different from any CI/CD development platforms that I was working on before. I learned that we need totally different things to automate properly in the data area. We need very accurate metadata. We need precise mappings reviewed by different data stakeholders. 

    I would rate this product as an eight (out of 10). I can imagine some capabilities for this product that would make it even better.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    reviewer1328286 - PeerSpot reviewer
    Data Program Manager at a non-tech company with 201-500 employees
    Real User
    Wide range of widgets enables a user to find information quickly. However, the configuration and structuring of this solution are not straightforward.
    Pros and Cons
    • "There is a wide range of widgets that enables the user to find the proper information quickly. The presentation of information is something very valuable."
    • "If we are talking about the business side of the product, maybe the Data Literacy could be made a bit simpler. You have to put your hands on it, so there is room for improvement."

    What is our primary use case?

    This solution is still an experiment for us. My company is in the process of defining its data governance process, which is not settled right now. We have used erwin DG to get acquainted with data governance from a technical point of view. We want to see how it fits in our organization because data governance is neither an IT matter nor a business matter; it is in-between. We have to put the proper organization in place in order for an IT solution to meet all the requirements. This has been in the works for almost two years now, during which we have been formally experimenting with erwin DG.

    We are not fully using it as we would if it were in production running regular operations. What we have done with the tool is have a metamodel for our data and try to see how it fits with the requirements of our project, businesses, and IT. We have two cases that are fully documented under erwin DG. What we are trying to do right now is to integrate all our regulatory obligations, including laws and regulations at the French and European levels. This would enable us to make a bridge between the businesses and the law.

    This is a SaaS solution maintained by erwin.

    How has it helped my organization?

    This type of solution was key to moving our entire company in the right direction by getting everyone to think about data governance.

    What is most valuable?

    It is very complete. Whatever you need, you can find it. While the presentation of results can be a bit confusing at first, there is a wide range of widgets that enables the user to find the proper information quickly. The presentation of information is something very valuable.

    The direct integration of processes and organization into the tool is something very nice. We feel there is a lot of potential for us in this. Although we have not configured it yet, this product could bridge the space between business and IT, allowing a lot of processes related to data governance to be handled through the tool. This gives it an all-in-one aspect that shows high potential.

    The mapping is good. 

    What needs improvement?

    If we are talking about the business side of the product, maybe the Data Literacy could be made a bit simpler. You have to put your hands on it, so there is room for improvement.

    For how long have I used the solution?

    We have been using it for two years (since July 2018).

    What do I think about the stability of the solution?

    The stability is very good.

    What do I think about the scalability of the solution?

    As far as I can see, it is scalable.

    We have approximately 10 people, so we are starting to use it on a small scale. We have data governance people, myself, a colleague in IT, four or five business users, and a data architect.

    How are customer service and technical support?

    Their support is very good. They have very good technical expertise of the product. 

    Which solution did I use previously and why did I switch?

    We previously used Excel. We switched to erwin DG because it had the best benefit-cost ratio and showed a lot of potential.

    How was the initial setup?

    The initial setup was very straightforward. However, if we are talking about the opening of the service and setting up our metadata model, it was not straightforward at all.  

    The initial deployment took less than two weeks.

    Our implementation strategy is small in scope because we are still in the experimentation phase. We just provided access to a few users, the people involved in the implementation, and let them play with it. Now, we are adding new use cases to the model.

    What about the implementation team?

    We used erwin's consultants. We would not have been able to do the initial deployment without them. They did the deployment with two people (a technical person and a consultant), though they were not full-time. 

    The opening up of the service by erwin was extremely simple and flawless. It is just that you find yourself confronted with an empty shell, and you have to fill that shell. The configuration and structuring of this is not straightforward at all; it requires modeling and is not accessible to everyone in the company.

    What was our ROI?

    As an experimentation, we are not fully in production. Therefore, it's absolutely impossible to have a return on investment right now.

    What's my experience with pricing, setup cost, and licensing?

    erwin is cheaper than other solutions and this should appeal to other buyers. It has a good price tag.

    Which other solutions did I evaluate?

    We are a public company that is obligated to open our purchasing to a wide range of providers. For example, we were in touch with Collibra, Informatica, and a few others.

    erwin DG was less complex at first sight and cheaper than other solutions. It also fulfilled what we wanted 100 percent and was the right fit for the maturity of our governance process. It was not too big or too small; it was in-between.

    What other advice do I have?

    erwin is very good for companies that have a very structured data governance process. It puts every possible tool around a company's data. This is very good for companies that are mature with their data. However, if a company is just looking for a tool to showcase its data in a data catalog, then I would advise it to be careful because erwin is sometimes really complex to master and configure. Once it is set up, you have to put your hands in the gears of the software to model how your data works. It is more of a company process modeler than a directory of all the data available that you need and can access. Industrial companies 30 to 40 years in age often struggle to find what data they may have, and it may prove difficult for them to use erwin directly.

    What we have done with the lineage is valuable, but manual. For the IT dictionary, automation is possible. However, we have not installed the plugin that allows us to do this. Right now, all the data that we have configured for the lineage has been inputted by hand. I would rate this feature as average.

    We have not tested the automation.
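
    For readers wondering what scripted lineage input could look like in contrast to hand entry, the following is a generic, hypothetical sketch: it reads lineage edges from a CSV export and builds a source-to-target adjacency map. The file layout and column names are assumptions, and the erwin plugin mentioned above may work quite differently.

```python
# Illustrative sketch only: building lineage edges from a CSV instead of
# entering them by hand. The file and its columns (source, target) are
# assumptions; the erwin plugin for automated lineage may differ entirely.
import csv
from collections import defaultdict

def read_lineage(path: str) -> dict:
    """Return a source -> [targets] adjacency map from a two-column CSV."""
    edges = defaultdict(list)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            edges[row["source"]].append(row["target"])
    return dict(edges)

if __name__ == "__main__":
    for source, targets in read_lineage("lineage_edges.csv").items():
        print(f"{source} -> {', '.join(targets)}")
```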

    I would rate this solution as seven (out of 10) since we have not experienced all the functionalities of the product yet.

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Other
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user