Delivery Director at a computer software company with 1,001-5,000 employees
Real User
Top 5
2023-08-09T07:32:00Z
Aug 9, 2023
I would rate erwin Data Intelligence by Quest seven out of ten. Before implementing erwin Data Intelligence by Quest, potential users should first determine their use case.
Project Coordinator at a computer software company with 201-500 employees
Real User
Top 20
2023-07-07T20:05:00Z
Jul 7, 2023
Faster delivery of data pipelines at less cost is more a question for the architect than for me, but it is possible if the metadata sources are clean and set up correctly. This is not an erwin-specific topic. My understanding is that a lot of data catalogs are dependent on what is called the "logical name" of the tables and columns. If the data store or the data analyst never labeled or created a correct lexicon for any of their metadata, then it's going to slow down the whole process, whether it's erwin, Alation, Informatica, or Collibra. erwin can make data pipelines faster, but it's dependent on how clean the metadata is and how well it was set up in the first place. And I believe it does save costs, because the Centers for Medicare & Medicaid Services wouldn't be using it if it wasn't cost-effective. erwin Data Intelligence is a good platform and I wish we were still using it.
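To illustrate the "logical name" dependency this reviewer describes, here is a minimal sketch of how a catalog can only describe columns automatically when a lexicon has been maintained. All column names and labels below are invented for illustration, not taken from any actual erwin metadata.

```python
# Hypothetical sketch: a data catalog can only link business-friendly
# logical names to physical columns when a lexicon exists. Any column
# missing from the lexicon becomes manual research work that slows the
# whole pipeline down, regardless of which catalog product is used.

physical_columns = ["CUST_NM", "ORD_DT", "TOT_AMT"]

# A well-maintained lexicon maps cryptic physical names to logical names.
lexicon = {
    "CUST_NM": "Customer Name",
    "ORD_DT": "Order Date",
    # "TOT_AMT" was never labeled -- a gap like this is the bottleneck.
}

def resolve(columns, lexicon):
    """Return (resolved, unresolved) logical names for a set of columns."""
    resolved = {c: lexicon[c] for c in columns if c in lexicon}
    unresolved = [c for c in columns if c not in lexicon]
    return resolved, unresolved

resolved, unresolved = resolve(physical_columns, lexicon)
print(resolved)    # columns the catalog can describe automatically
print(unresolved)  # ['TOT_AMT'] -- needs manual research first
```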
I rate erwin Data Intelligence an eight out of ten. Before implementing, you should adequately define the processes behind this tool. You need to understand the correct way to gather and document metadata, set up a mapping project, and define the automation. If you do not have the processes sorted out, you will still have a map but won't realize all the benefits. It's all about standardization, so you can have the metadata in place. It's the same with automation. You need to understand what kinds of automation you need so you can implement it and deploy the necessary resources.
I give erwin Data Intelligence an eight out of ten. Premier Support has added minimal value to our overall investment. I recommend doing a POC for erwin Data Intelligence before moving forward to ensure that it meets all requirements.
Works at an insurance company with 5,001-10,000 employees
Real User
2022-12-18T06:56:00Z
Dec 18, 2022
We had a very specific use case, and it definitely met our needs. Therefore, on a scale from one to ten, I would rate erwin Data Intelligence by Quest a nine.
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
2021-08-12T12:55:07Z
Aug 12, 2021
My advice is to consider the advantages of one-stop shopping: do you like going to five different stores to get what you need?
Having all of the information to support your work and Enterprise in a single location saves both time and money in the project, data design, development, and data usage through improved data intelligence.
Consider that with erwin DI, you can have reference data, business glossary, data models, data asset information, data mapping, data lineage, impact analysis, and data quality metrics together in one place, one software that is intuitive and easy to navigate. In my experience, consolidation of information has overcome many barriers and improved project success.
I rate erwin Data Intelligence nine out of 10. LDAP integration is provided, but the roles and role integration require some research and setup.
Architect at an insurance company with 10,001+ employees
Real User
2022-08-10T05:48:00Z
Aug 10, 2022
Our only systematic process for refreshing metadata is from the erwin Data Modeler tool. Whenever those updates are done, we then have a systematic way to update the metadata in our reference tool. I would rate the product as eight out of 10. It is a good tool with a lot of good features. We have a whole laundry list of things that we are still looking for, which we have shared with them, e.g., improving stability and the product's overall health. The cost is going up, but it provides us all the information that we need. The basic building blocks of our governance are tightly coupled with this tool.
Business Intelligence BA at an insurance company with 10,001+ employees
Real User
2021-05-24T17:09:00Z
May 24, 2021
My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market. I would rate this solution a nine out of ten.
Senior Director at a retailer with 10,001+ employees
Real User
2021-05-16T04:21:00Z
May 16, 2021
We are not using erwin's AI Match feature to automatically discover and suggest relationships and associations between business terms and physical metadata. We are still trying to get all of our data completely mapped in there. After that, we will get to the next level of maturity, which would basically be leveraging some of the additional features such as AI Match. Similarly, we have not used the feature for generating production code through automated code engineering. Currently, we are primarily doing the automation by using Smart Data Connectors to build data lineages, which is helping with the overall understanding of the data flow. Over the next few months, as it gets more and more updated, we might see some benefits in this area. I would expect at least 25% savings in time.

It provides a real-time, understandable data pipeline to some level. Advanced users can completely understand its real-time data pipeline; average users may not be able to.

Any organization that is looking into implementing this type of solution should look at its maturity in terms of data literacy. This is where I really see the big challenge. It is almost like a change management exercise to make sure people understand how to use the data and build some of the processes around data governance. The maturity of the organization is really critical, and you should make your plans accordingly.

The biggest lesson that I have learned from using this solution is probably around how to maintain the data dictionary, which is really critical for enabling data literacy. A lot of times, companies don't have these data dictionaries built. Building the data dictionary and getting it populated into the data catalog is where we spend some of the time. A development process needs to be established to create this data dictionary and maintain it going forward. You have to make sure that it is not a one-time exercise.
It is a continuous process that should be included as part of the development process. I would rate erwin Data Intelligence for Data Governance an eight out of 10. If they can improve its user interface, it will be a great product.
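The reviewer's point that dictionary maintenance must be continuous, not a one-time exercise, can be sketched as a simple completeness check run as part of the development process. The schema and definitions below are invented for illustration; in practice they would come from the catalog's own metadata.

```python
# Hypothetical sketch: a CI-style check that flags any column lacking a
# data dictionary definition, so gaps surface during development rather
# than accumulating silently. All table and column names are invented.

schema = {"orders": ["order_id", "customer_id", "order_total"]}

data_dictionary = {
    ("orders", "order_id"): "Unique identifier for an order.",
    ("orders", "customer_id"): "Identifier of the ordering customer.",
    # "order_total" has no definition yet -- the check below catches it.
}

def missing_definitions(schema, dictionary):
    """List (table, column) pairs that lack a non-empty dictionary entry."""
    return [
        (table, column)
        for table, columns in schema.items()
        for column in columns
        if not dictionary.get((table, column), "").strip()
    ]

gaps = missing_definitions(schema, data_dictionary)
print(gaps)  # [('orders', 'order_total')]
```

A check like this could fail a build or open a task whenever it returns a non-empty list, which is one way to keep the dictionary part of the ongoing development process.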
We haven't integrated Data Catalog and Data Literacy yet. Our client is a little bit behind on being able to utilize these aspects that we've presented for additional value. My advice would be to partner with an integrator. erwin has quite a few of them. If you're going to jump into this in earnest, you're going to need to have that experience and support. The biggest lesson I have learned is that the only limitation is the imagination. Anything is possible. There's quite a strong capability with this product. I've seen what you can come up with as far as innovative flows, processes, automation, etc. It's got quite strong capabilities. The next lesson is to understand how automation fits within a company's framework, and to embrace automation. There are some good quality points to continue with, certainly within the data cataloging, data governance, and so forth. There's quite a bit of good capability there. I rate erwin Data Intelligence for Data Governance a nine out of ten.
Practice Director - Digital & Analytics Practice at HCL Technologies
Real User
2020-10-14T06:37:00Z
Oct 14, 2020
It is a different experience. Collaboration and communication are very important when you want to harvest the value from the humongous amount of data that you have in your organization. All these aspects are soft aspects, but are very important when it comes to getting value from data. Data pipelines are really important because of the kinds of data that are spread across different formats, in differing granularity. You need to have a pipeline which removes all the complexities and connects many types of sources, to bring data into any type of target. Irrespective of the kind of technology you use, your data platform should be adaptive enough to bring data in from any type of source, at any interval, in real time. It should handle any volume of data, structured and unstructured. That kind of pipeline is very important for any analysis, because you need to bring in data from all types of sources. Only then can you do a proper analysis of the data. A data pipeline is the heart of the analysis. Overall, erwin DI is not so costly and it brings a lot of unique features, like metadata hooks and metadata harvesters, along with the business glossaries, business-to-business mapping, and technology mapping. The product has so many nice features. For an organization that wants to realize value from the potential of its data, it is best to go with erwin and start the journey.
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
2020-08-04T07:26:00Z
Aug 4, 2020
Erwin currently supports two implementations of this product: one on a SQL Server database and the other on an Oracle Database. It seems that the SQL Server database may have fewer complications than the Oracle Database. We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server.

We are not currently using the forward engineering capabilities of the Data Intelligence suite. We do use erwin Data Modeler for forward engineering the data definition language that is used to change the actual databases where the data resides. We are currently using the Informatica reverse smart connector so that we can understand what is in the Informatica jobs, jobs which may not have been designed with, or have, a source-to-target mapping document. That's as opposed to having a developer create data movement without any documentation to support it. We look forward to potentially using the capability to create Informatica jobs, or other types of jobs, based on the mapping work, so that we can automate our work more and decrease our delivery time and cost to deliver while increasing our accuracy of delivery.

We've learned several lessons from using erwin Data Intelligence Suite. One lesson is around adoption. There will be better adoption through ease of use. We do have another product in-house and the largest complaint about that product is that it's extremely difficult to use. The ease of use with the Data Intelligence Suite has significantly improved our adoption rate. Also, having all of the information in one place has significantly improved our adoption and people's desire to use the tool, rather than looking here, there, and everywhere for their information.
The automated data lineage and impact analysis being driven from the mapping documents are astounding in reducing the time to research impact analysis from six to 16 weeks down to minutes, because it's a couple of clicks with a mouse. Having all of the information in one place also improves our knowledge about where our data is and what it is so that we can use it in the best possible ways.
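The reason mapping-driven impact analysis collapses weeks of research into "a couple of clicks" is that, once source-to-target mappings are captured as data, downstream impact is just a graph traversal. Here is a minimal sketch of that idea; the mapping rows and column names are invented examples, not erwin's internal representation.

```python
# Hypothetical sketch: impact analysis as a breadth-first walk over
# source-to-target mapping rows. Every downstream column reachable from
# the starting column is "impacted" by a change to it.
from collections import deque

# Each mapping row: (source column, target column). Names are invented.
mappings = [
    ("src.customer.name", "stg.customer.full_name"),
    ("stg.customer.full_name", "dw.dim_customer.name"),
    ("dw.dim_customer.name", "rpt.customer_report.name"),
]

def impacted(start, mappings):
    """Return every column downstream of `start` in the mapping graph."""
    targets = {}
    for src, tgt in mappings:
        targets.setdefault(src, []).append(tgt)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in targets.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(impacted("src.customer.name", mappings))
```

The manual equivalent, reading every ETL job and report to trace the same chain, is what used to take the six to sixteen weeks the reviewer mentions.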
Data Program Manager at a non-tech company with 201-500 employees
Real User
2020-04-12T07:27:00Z
Apr 12, 2020
erwin is very good for companies that have a very structured data governance process. It puts every possible tool around a company's data. This is very good for companies that are mature with their data. However, if a company is just looking for a tool to showcase their data in a data catalog, then I would advise them to be careful, because erwin is sometimes really complex to master and configure. Once it is set up, you have to put your hands in the gears of the software to model how your data works. It is more of a company process modeler than a directory of all the data available that you need and can access. Industrial companies that are 30 to 40 years old struggle to find what data they may have, and it may prove difficult for them to use erwin directly. What we have done with the lineage is valuable, but manual. For the IT dictionary, automation is possible. However, we have not installed the plugin that allows us to do this. Right now, all the data that we have configured for the lineage has been inputted by hand. I would rate this feature as average. We have not tested the automation. I would rate this solution as seven (out of 10) since we have not experienced all the functionalities of the product yet.
Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
Real User
2020-01-30T07:55:00Z
Jan 30, 2020
If you have the ability to pull a steering committee together to talk about how your data asset metadata needs to be used in different processes, or how you can connect it into mission-critical business processes so you slowly change the culture (because erwin DI is just part of the processes), that would probably be a smoother transition than what I am trying to do. I'm sitting in an office by myself trying to push it out. If I had a steering committee to help market it or move it into different processes, this would be easier.

Along the same lines as setting up an erwin Workgroup environment, you need to be thoughtful about how you are going to name things. You can set up catalogs and collection points for all your physical data, for instance. We had to consider that if we did it by server, then every time we changed a server name, we'd have to change everything. You have to be careful and thoughtful about how you want to do the collections, because you don't want the collection names to change every time you're changing something physically. What I did was set up more logical collections, crossing all the servers, with the following going into different catalogs:
* The analytics reporting data sets
* The business-purchased applications
* External data sets
* The custom applications

I'm collecting the physical metadata, and they can change and update that. However, the structure of how I am keeping the data available for people searching for it is more logically focused. You can update it. However, once people get used to looking in a library using the Dewey Decimal System, they won't understand if all of a sudden you reorganize by author name. So you have to think a bit down the road as to what is going to be stable into the future, because the more people get accustomed to it being organized a certain way, the less they will understand if you pull the rug out from under them.
I'm going to give the solution an eight (out of 10) because I'm really happy with what I've been able to do so far. The more that the community uses this tool, the more feedback they will get, and the better it will become.
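The logical-collection approach this reviewer describes can be sketched as a simple indirection layer: catalogs are named by purpose, and each one maps to the physical servers behind it. A server rename then touches one mapping entry instead of every catalog name users have learned. The catalog and server names below are invented for illustration.

```python
# Hypothetical sketch: stable, logically named catalogs with an
# indirection layer to physical servers. Renaming a server changes the
# mapping, not the catalog names people search by.

logical_catalogs = {
    "analytics_reporting_datasets": ["srv-olap-01"],
    "business_purchased_applications": ["srv-app-02", "srv-app-03"],
    "external_datasets": ["sftp-vendor-01"],
    "custom_applications": ["srv-custom-01"],
}

def rename_server(catalogs, old, new):
    """Apply a physical server rename without touching catalog names."""
    for servers in catalogs.values():
        for i, server in enumerate(servers):
            if server == old:
                servers[i] = new
    return catalogs

rename_server(logical_catalogs, "srv-app-02", "srv-app-09")
print(logical_catalogs["business_purchased_applications"])
# ['srv-app-09', 'srv-app-03'] -- catalog names are unchanged
```

This is the same stability argument as the Dewey Decimal analogy: the organizing scheme users rely on stays fixed while the physical layer underneath is free to change.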
Our first goals were data literacy and data as an asset. Those were our two big, ultimate goals three years ago. Data literacy turned out to be 10 times more important than a data warehouse. We could look at existing data sets and, just by educating people, it gave them an advantage almost immediately. The fact that the data governance was able to put a framework around data literacy helped us focus on the right answer, even if it wasn't the first one given. In other words, sometimes we'd have the same answer three or four times, and it would shift until we nailed it. But without governance, we would never have done that and we would have stayed the same. The secret to the success of this project was that we had a vision and we stuck to it. Governance was important to us, no matter how other people might have thought about it. In my very first data steward meeting I was introducing everybody to these brand-new terms they'd never seen, and someone in our analytical group totally derailed the meeting. So be aware that it's not going to be easy, but have a vision. And make sure that governance is important or don't bother. It's not something that a lot of people add value to at all. When you say, "Oh, I want governance so we can have a data dictionary and you can go look at it," they'll say, "I don't want to look at it, just give me a report." But the ability for those who need to do that is huge. Have a vision and stick to it and be willing to take a step back, sometimes, to go two forward. The neat thing is that we've pretty much done all of this with two to three people, for our entire organization. We do have three data teams that are using the Modeler for development — ETL SSIS stuff — but we have a pretty serious "wash, rinse, repeat" standard. If anything is in doubt, we just go back to the business rules and see what our rules are. What are our principles, and are we meeting them? 
As far as automating the changes through the environments, it has helped, but not a lot. It's not like it was a silver bullet. We need help there, because there's so much. There's the model, but once you promote that in different environments, sometimes you miss it because you only get three or four days to get out of QA to get it into stage. Obviously, you mitigate risks with automation. It does have an impact. As a company, we just haven't been able to take full advantage of it now, but that's our hope. We're only into it for about a year-and-a-half, even though we have run with the suite for almost three years. We're still immature. I wish I had everything at the push of one button, the "Easy" button. Some of it's over our heads. We could use some new training and we could use some additional support. erwin has been great with us, but it's also a matter of the appetite and the resources. The biggest issue is that I don't have a team of people doing what a team of people need to do to accomplish what we would like to. It's done by a small number of people on a consistent basis, and not full-time. The solution's generation of production code through automated code engineering would reduce the time it takes to go from initial concept to implementation, but we're a Microsoft shop and most of all that is done inside TFS or Visual Studio. That's how we manage all our codebase, including release management. That's all done separately and is automated. We're trying to create some interfaces between the two. We just haven't gotten there. In my three years using erwin, besides actually getting approval for the money to purchase the software, I don't think I've had a struggle with it. They've been great. When we first got on and we had some questions, they got me to the development team in England and set it all up with us without question, no extras. They just tried to make sure it worked. I would rate erwin DI for Data Governance at eight out of 10. 
I never give a 10 because I have yet to see perfection. It has some gaps, but I definitely think it's in the top third. As far as rates go, I don't have a lot to compare it with. It's easy now but it took going through a learning curve, but that's the case with any software. Does it need to mature a little? Possibly. But that would be it. With their roadmap, they're buying companies, and changing things, and doing things. I've been pleased.
Solution Architect at a pharma/biotech company with 10,001+ employees
Real User
2020-01-22T07:36:00Z
Jan 22, 2020
I learned how to automate in the data area and how this is very different from any CI/CD development platforms that I was working on before. I learned that we need totally different things to automate properly in the data area. We need very accurate metadata. We need precise mappings reviewed by different data stakeholders. I would rate this product as an eight (out of 10). I can imagine some capabilities for this product that would make it even better.
I would rate erwin Data Intelligence by Quest seven out of ten. Before implementing erwin Data Intelligence by Quest, potential users should first determine their use case.
Faster delivery of data pipelines at less cost is more a question for the architect than for me, but it is possible if the metadata sources are clean and set up correctly. This is not an erwin-specific topic. My understanding is that a lot of data catalogs are dependent on what is called the "logical name" of the tables and columns. If the data store or the data analyst never labeled or created a correct lexicon for any of their metadata, then it's going to slow down the whole process, whether it's Erwin or Alation or Informatica or Calibra. erwin can make data pipelines faster, but it's dependent on how clean the metadata is and how well it was set up in the first place. And I believe it does save costs because the Medicare & Medicaid Services wouldn't be using it if it wasn't cost-effective. erwin Data Intelligence is a good platform and I wish we were still using it.
I rate erwin Data Intelligence an eight out of ten. Before implementing, you should adequately define the processes behind this tool. You need to understand the correct way to gather document metadata, set up a project in the mapping, and define the automation. If you do not have the processes sorted out, you will still have a map but won't realize all the benefits. It's all about standardization, so you can have the metadata in place. It's the same with automation. You need to understand what kinds of automation you need so you can implement it and deploy the necessary resources.
I give erwin Data Intelligence an eight out of ten. Premier Support has added minimal value to our overall investment. I recommend doing a POC for erwin Data Intelligence before moving forward to ensure that it meets all requirements.
I rate erwin Data Intelligence nine out of 10.
We had a very specific use case, and it definitely met our needs. Therefore, on a scale from one to ten, I would rate erwin Data Intelligence by Quest a nine.
My advice is to consider the advantages of one-stop shopping - do you like going to 5 different stores to get what you need?
Having all of the information to support your work and Enterprise in a single location saves both time and money in the project, data design, development, and data usage through improved data intelligence.
Consider that with erwin DI, you can have reference data, business glossary, data models, data asset information, data mapping, data lineage, impact analysis, and data quality metrics together in one place, one software that is intuitive and easy to navigate. In my experience, consolidation of information has overcome many barriers and improved project success.
I rate erwin Data Intelligence nine out of 10. LDAP integration is provided, but the roles and role integration require some research and setup.
Our only systematic process for refreshing metadata is from the erwin Data Modeler tool. Whenever those updates are done, we then have a systematic way to update the metadata in our reference tool. I would rate the product as eight out of 10. It is a good tool with a lot of good features. We have a whole laundry list of things that we are still looking for, which we have shared with them, e.g., improving stability and the product's overall health. The cost is going up, but it provides us all the information that we need. The basic building blocks of our governance are tightly coupled with this tool.
My advice for anybody who is considering this product is that it's a useful tool. It is good for lineage and good for documenting mappings. Overall, it is very useful for data warehousing, and it is not expensive compared to similar solutions on the market. I would rate this solution a nine out of ten.
We are not using erwin's AI Match feature to automatically discover and suggest relationships and associations between business terms and physical metadata. We are still trying to get all of our data completely mapped in there. After that, we will get to the next level of maturity, which would be basically leveraging in some of the additional features such as AI Match. Similarly, we have not used the feature for generating the production code through automated code engineering. Currently, we are primarily doing the automation by using Smart Data Connectors to build some data lineages, which is helping with the overall understanding of the data flow. Over the next few months, as it gets more and more updated, we might see some benefits in this area. I would expect at least 25% savings in time. It provides a real-time understandable data pipeline to some level. Advanced users can completely understand its real-time data pipeline. Average users may not be able to understand it. Any organization that is looking into implementing this type of solution should look at its data literacy and maturity in terms of data literacy. This is where I really see the big challenge. It is almost like a change management exercise to make sure people understand how to use the data and build some of the processes around the data governance. The maturity of the organization is really critical, and you should make your plans accordingly to implement it. The biggest lesson that I have learned from using this solution is probably around how to maintain the data dictionary, which is really critical for enabling data literacy. A lot of times, companies don't have these data dictionaries built. Building the data dictionary and getting it populated into the data catalog is where we spend some of the time. A development process needs to be established to create this data dictionary and maintain it going forward. You have to just make sure that it is not a one-time exercise. 
It is a continuous process that should be included as part of the development process. I would rate erwin Data Intelligence for Data Governance an eight out of 10. If they can improve its user interface, it will be a great product.
We haven't integrated Data Catalog and Data Literacy yet. Our client is a little bit behind on being able to utilize these aspects that we've presented for additional value. My advice would be to partner with an integrator. erwin has quite a few of them. If you're going to jump into this in earnest, you're going to need to have that experience and support. The biggest lesson I have learned is that the only limitation is the imagination. Anything is possible. There's quite a strong capability with this product. I've seen what you can come up with as far as innovative flows, processes, automation, etc. It's got quite strong capabilities. The next lesson would be in regards to how automation fits within a company's framework and to embrace automation. There are some good quality points to continue with, certainly within the data cataloging, data governance, and so forth. There's quite a bit of good capability there. I rate erwin Data Intelligence for Data Governance a nine out of ten.
It is a different experience. Collaboration and communication are very important when you want to harvest the value from the humongous amount of data that you have in your organization. All these aspects are soft aspects, but are very important when it comes to getting value from data. Data pipelines are really important because of the kinds of data that are spread across different formats, in differing granularity. You need to have a pipeline which removes all the complexities and connects many types of sources, to bring data into any type of target. Irrespective of the kind of technology you use, your data platform should be adaptive enough to bring data in from any types of sources, at any intervals, in real-time. It should handle any volume of data, structured and unstructured. That kind of pipeline is very important for any analysis, because you need to bring in data from all types of sources. Only then you can do a proper analysis of data. A data pipeline is the heart of the analysis. Overall, erwin DI is not so costly and it brings a lot of unique features, like metadata hooks and metadata harvesters, along with the business glossaries, business to business mapping, and technology mapping. The product has so many nice features. For an organization that wants to realize value from the potential of its data, it is best to go with erwin and start the journey.
Erwin currently supports two implementations of this product: one on a SQL Server database and the other on an Oracle Database. It seems that the SQL Server database may have fewer complications than the Oracle Database. We chose to implement on an Oracle Database because we also had the erwin Data Modeler and Web Portal products in-house, which have been set up on Oracle Databases for many years. Sometimes the Oracle Database installation has caused some hiccups that wouldn't necessarily have been caused if we had used SQL Server. We are not currently using forward engineering capabilities of the Data Intelligence suite. We do use erwin Data Modeler for forward engineering the data definition language that is used to change the actual databases where the data resides. We are currently using the Informatica reverse smart connector so that we can understand what is in the Informatica jobs, jobs which may not have been designed with, or have, a source-to-target mapping document. That's as opposed to having a developer create data movement without any documentation to support it. We look forward to potentially using the capability to create Informatica jobs, or other types of jobs, based on the mapping work, so that we can automate our work more and decrease our delivery time and cost to deliver while increasing our accuracy of delivery. We've learned several lessons from using erwin Data Intelligence Suit. One lesson is around adoption. There will be better adoption through ease of use. We do have another product in-house and the largest complaint about that product is that it's extremely difficult to use. The ease of use with the Data Intelligence Suite has significantly improved our adoption rate. Also, having all of the information in one place has significantly improved our adoption and people's desire to use the tool, rather than looking here, there, and everywhere for their information. 
The automated data lineage and impact analysis being driven from the mapping documents are astounding in reducing the time to research impact analysis from six to 16 weeks down to minutes, because it's a couple of clicks with a mouse. Having all of the information in one place also improves our knowledge about where our data is and what it is so that we can use it in the best possible ways.
erwin is very good for companies who have a very structured, data governance process. It puts every possible tool around a company's data. This is very good for companies who are mature with their data. However, if a company is just looking for a tool to showcase their data in a data catalog, then I would advise companies to be careful because erwin is sometimes really complex to master and configure. Once it is set up, you have to put your hands in the gears of the software to model how your data works. It is more of a company process modeler than a directory of all data available you need and can access. Industrial companies over 30 to 40 years in age are struggling to find what data they may have and it may prove to be difficult for them to use erwin directly. What we have done with the lineage is valuable, but manual. For the IT dictionary, automation is possible. However, we have not installed the plugin that allows us to do this. Right now, all the data that we have configured for the lineage has been inputted by hand. I would rate this feature as average. We have not tested the automation. I would rate this solution as seven (out of 10) since we have not experienced all the functionalities of the product yet.
If you have the ability to pull a steering committee together to talk about how your data asset metadata needs to be used in different processes, or how you can connect it into mission-critical business processes so you slowly change the culture, that would probably be a smoother transition than what I am trying to do, because erwin DI is just part of the processes. I'm sitting in an office by myself trying to push it out. If I had a steering committee to help market it or move it into different processes, this would be easier.

Along the same lines as setting up an erwin Workgroup environment, you need to be thoughtful about how you are going to name things. You can set up catalogs and collection points for all your physical data, for instance. We had to consider that if we did it by server, then every time a server name changed, we'd have to change everything. You have to be a little careful and thoughtful about how you want to do the collections, because you don't want the collection names to change every time you're changing something physically. What I did was set up a more logical collection, crossing all the servers, with the following going into different catalogs:
* The analytics reporting data sets
* The business-purchased applications
* External data sets
* The custom applications

I'm collecting the physical metadata, and they can change and update that. However, the structure of how I am keeping the data available for people searching for it is more logically focused. You can update it. However, once people get used to looking in a library using the Dewey Decimal System, they don't understand if all of a sudden you reorganize by author name. So you have to think a bit down the road about what is going to be stable into the future. The more people become accustomed to it being organized a certain way, the less they will understand if all of a sudden you pull the rug out from under them.
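The catalog-naming advice above can be sketched concretely: keep the hierarchy logical and record the physical server as a mutable attribute of each asset, so a server rename touches one field instead of the whole collection tree. This is a hypothetical illustration only; the catalog names follow the reviewer's four categories, but the assets and server names are invented.

```python
# Hedged sketch: logical catalogs (stable, what searchers navigate)
# with physical location as a changeable attribute. Assets and server
# names are invented for illustration.
catalogs = {
    "Analytics Reporting Data Sets": [
        {"asset": "sales_dashboard_db", "server": "SQLPROD01"},
    ],
    "Business-Purchased Applications": [
        {"asset": "hr_app_db", "server": "SQLPROD02"},
    ],
    "External Data Sets": [],
    "Custom Applications": [
        {"asset": "claims_db", "server": "SQLPROD01"},
    ],
}

def rename_server(catalogs, old, new):
    """Update the physical location everywhere. Catalog names are
    untouched, so users' mental map of the 'library' stays stable."""
    for assets in catalogs.values():
        for a in assets:
            if a["server"] == old:
                a["server"] = new

# A server migration changes attributes, not the navigable structure.
rename_server(catalogs, "SQLPROD01", "SQLPROD07")
```

Had the hierarchy been keyed by server name instead, the same migration would have forced a rename of every collection under it, which is exactly the Dewey-Decimal reshuffle the reviewer warns against.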
I'm going to give the solution an eight (out of 10) because I'm really happy with what I've been able to do so far. The more that the community uses this tool, the more feedback they will get, and the better it will become.
Our first goals were data literacy and data as an asset. Those were our two big, ultimate goals three years ago. Data literacy turned out to be 10 times more important than a data warehouse. We could look at existing data sets and, just by educating people, it gave them an advantage almost immediately. The fact that the data governance was able to put a framework around data literacy helped us focus on the right answer, even if it wasn't the first one given. In other words, sometimes we'd have the same answer three or four times, and it would shift until we nailed it. But without governance, we would never have done that and we would have stayed the same.

The secret to the success of this project was that we had a vision and we stuck to it. Governance was important to us, no matter how other people might have thought about it. In my very first data steward meeting I was introducing everybody to these brand-new terms they'd never seen, and someone in our analytical group totally derailed the meeting. So be aware that it's not going to be easy, but have a vision. And make sure that governance is important or don't bother. It's not something that a lot of people add value to at all. When you say, "Oh, I want governance so we can have a data dictionary and you can go look at it," they'll say, "I don't want to look at it, just give me a report." But the ability for those who need to do that is huge. Have a vision and stick to it and be willing to take a step back, sometimes, to go two forward.

The neat thing is that we've pretty much done all of this with two to three people, for our entire organization. We do have three data teams that are using the Modeler for development — ETL SSIS stuff — but we have a pretty serious "wash, rinse, repeat" standard. If anything is in doubt, we just go back to the business rules and see what our rules are. What are our principles, and are we meeting them?
As far as automating the changes through the environments, it has helped, but not a lot. It's not like it was a silver bullet. We need help there, because there's so much. There's the model, but once you promote it through different environments, sometimes you miss something because you only get three or four days to get out of QA and into stage. Obviously, you mitigate risks with automation. It does have an impact. As a company, we just haven't been able to take full advantage of it yet, but that's our hope. We're only into it for about a year and a half, even though we have run with the suite for almost three years. We're still immature.

I wish I had everything at the push of one button, the "Easy" button. Some of it's over our heads. We could use some new training and some additional support. erwin has been great with us, but it's also a matter of appetite and resources. The biggest issue is that I don't have a team of people doing what a team of people needs to do to accomplish what we would like to. It's done by a small number of people on a consistent basis, and not full-time.

The solution's generation of production code through automated code engineering would reduce the time it takes to go from initial concept to implementation, but we're a Microsoft shop and most of that is done inside TFS or Visual Studio. That's how we manage our whole codebase, including release management. That's all done separately and is automated. We're trying to create some interfaces between the two; we just haven't gotten there.

In my three years using erwin, besides actually getting approval for the money to purchase the software, I don't think I've had a struggle with it. They've been great. When we first got on and we had some questions, they got me to the development team in England and set it all up with us without question, no extras. They just tried to make sure it worked. I would rate erwin DI for Data Governance at eight out of 10.
I never give a 10 because I have yet to see perfection. It has some gaps, but I definitely think it's in the top third. As far as ratings go, I don't have a lot to compare it with. It's easy now, but it took going through a learning curve; that's the case with any software. Does it need to mature a little? Possibly. But that would be it. With their roadmap, they're buying companies, changing things, and doing things. I've been pleased.
I learned how to automate in the data area and how this is very different from any of the CI/CD development platforms I worked on before. I learned that we need totally different things to automate properly in the data area: very accurate metadata and precise mappings reviewed by the different data stakeholders. I would rate this product as an eight (out of 10). I can imagine some capabilities for this product that would make it even better.