Learn what your peers think about erwin Data Modeler by Quest. Get advice and tips from experienced pros sharing their opinions. Updated: December 2024.
VP Enterprise Data Architecture at a financial services firm with 5,001-10,000 employees
Real User
Top 20
Feb 21, 2023
The solution is excellent at providing a visual representation of a database and can generate DDL for implementing changes. We use the logical model to review with business people, ensuring they have the required fields for processing. We also use it as a data dictionary for the physical data model, to understand what each term means. This helps us map the logical and physical terms to the business definitions and understand our data.
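To make that concrete, here is a minimal sketch of the kind of DDL a forward-engineering pass can emit from a physical model, paired with the business definitions that serve as a data dictionary. The table and column names are hypothetical, and the COMMENT syntax shown is generic ANSI/PostgreSQL style rather than erwin's exact output:

    -- Hypothetical table generated from a physical data model
    CREATE TABLE customer (
        customer_id  INTEGER      NOT NULL,
        full_name    VARCHAR(100) NOT NULL,
        created_date DATE         NOT NULL,
        CONSTRAINT pk_customer PRIMARY KEY (customer_id)
    );

    -- Business definitions carried into the database, acting as a data
    -- dictionary that maps physical terms back to business meaning
    COMMENT ON TABLE customer IS 'A person or organization that buys our products';
    COMMENT ON COLUMN customer.full_name IS 'Legal name as it appears on the account';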
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
Feb 3, 2021
It's difficult to name one thing I like most about erwin DM. The integration with DIS is key to helping an organization understand its data, and the forward engineering of DDL automates code generation, reducing manual effort, increasing consistency with governed naming conventions, and enabling faster delivery. But the thing I like most is probably the centralized data model repository, which transparently shares information with all data modelers while protecting the intellectual property in the data models.
Data Management & Automation Manager at a consultancy with 11-50 employees
Reseller
Nov 23, 2021
The most valuable features are the ability to reverse engineer and do model comparison. With the reverse engineering, I can understand the databases from third-party products. With the model comparison, I can track the differences between two versions of the same database.
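As a hedged illustration of what a model compare surfaces (the objects are invented, and the syntax is PostgreSQL-style), the differences between two versions of the same database typically come out as synchronizing ALTER statements like these:

    -- Differences a compare between v1 and v2 might report, expressed as
    -- the ALTER DDL needed to bring the older database in line
    ALTER TABLE orders ADD COLUMN ship_date DATE;              -- column added in v2
    ALTER TABLE orders ALTER COLUMN status TYPE VARCHAR(20);   -- length widened in v2
    ALTER TABLE order_item
        ADD CONSTRAINT fk_order_item_order
        FOREIGN KEY (order_id) REFERENCES orders (order_id);   -- relationship added in v2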
Senior Consultant at a tech services company with 11-50 employees
Real User
Aug 17, 2021
Being able to point it to a database and then pull the metadata is a valuable feature. Another valuable feature is being able to rearrange the model so that we can display it to users. We are able to divide the information into subject areas, and we can divide the data landscape into smaller chunks, which makes it easier to understand. If you have 14 subject areas, 1,000 entities, and 6,000 columns, you can't quite understand it all at once. So, being able to have the same underlying model but only display portions of it at a time is extremely useful.
Senior Data Architect at a financial services firm with 10,001+ employees
Real User
Jun 7, 2021
The solution's ability to compare and synchronize data sources with data models is fantastic. We use it for that on a regular basis to make sure that changes haven't been made to the database outside of the modeling process. I can take existing databases, reverse engineer them, and understand their structure within 15 minutes. If I didn't have Data Modeler, it would take hours. It increases our productivity and helps in understanding our legacy application.
Senior Data Warehouse Architect at a financial services firm with 1,001-5,000 employees
Real User
Dec 29, 2020
The logical model gives developers, as well as the data modelers, an understanding of exactly how each object interacts with the others, whether it's a one-to-many, many-to-many, or many-to-one relationship.
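For readers newer to the notation, here is a minimal, hypothetical sketch of how those cardinalities land in a physical schema: a one-to-many becomes a foreign key on the "many" side, and a many-to-many resolves into an associative (junction) table:

    -- One-to-many: one customer has many orders
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer (customer_id)
    );

    -- Many-to-many: orders and products, resolved via a junction table
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY
    );
    CREATE TABLE order_product (
        order_id   INTEGER NOT NULL REFERENCES orders (order_id),
        product_id INTEGER NOT NULL REFERENCES product (product_id),
        PRIMARY KEY (order_id, product_id)
    );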
Independent Consultant at a tech consulting company with 1-10 employees
Real User
Oct 19, 2020
The generation of DDL saved us from having to write the steps by hand. You still had to go in and make some minor modifications to make it deployable to the database system. The data lineage, however, is very valuable for tracing our use of data, especially personal, confidential data, through different systems.
Data Management & Automation Manager at a consultancy with 11-50 employees
Reseller
Oct 19, 2020
The ability to collaborate between different members across the organization is the most valuable feature. It gives us the ability to work on the same model, regardless of where we are physically.
We use the macros with naming standards patterns, domains, datatypes, and some common attributes. As for other automation, the Bulk Editor supports mass updates: I can export the model data, easily see which entities and attributes are nonstandard or inaccurate, make the changes, and upload them back through the Bulk Editor. When taking on a big new project, it can save about half a day across an entire team.
Data Modeler at a government with 10,001+ employees
Real User
Oct 14, 2020
It's a safeguard for me because I'm always concerned that somebody is freehanding it and will forget a key coming from the parent. The migrating keys are a great feature. Identifying and non-identifying relationships, with the difference visually right there to understand, are great features (sketched below).
erwin is key to being able to visually understand whatever the customer is requesting. They'll give you words on paper, but once they can actually view it as a picture, it really comes to life. The data comes to life to where they understand exactly what they're asking for.
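To illustrate the distinction this reviewer is drawing, here is a hedged sketch with invented tables: in an identifying relationship the migrated parent key becomes part of the child's primary key, while in a non-identifying relationship it stays an ordinary foreign key column:

    -- Parent table
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY
    );

    -- Identifying relationship: an order line cannot exist without its order,
    -- so the migrated key (order_id) is part of the child's primary key
    CREATE TABLE order_line (
        order_id    INTEGER NOT NULL REFERENCES orders (order_id),
        line_number INTEGER NOT NULL,
        PRIMARY KEY (order_id, line_number)
    );

    -- Non-identifying relationship: the migrated key is just a foreign key;
    -- the child keeps its own independent primary key
    CREATE TABLE shipment (
        shipment_id INTEGER PRIMARY KEY,
        order_id    INTEGER NOT NULL REFERENCES orders (order_id)
    );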
President at Global Retail Technology Advisors, LLC
Real User
Top 10
Jul 26, 2020
It produces monthly savings of hundreds of thousands of dollars. Think about a company like Costco, with all of its point-of-sale systems and all of its applications. If every application in Costco had its own data model, trying to integrate those, upgrade them, and manage different versions of the same model throughout the stores would be an absolute nightmare. It's phenomenally expensive. This helps reduce that cost significantly. I'm talking on the order of hundreds of thousands of dollars.
We find that its ability to generate database code from a model for a wide array of data sources cuts development time. The ability to create one model in your design phase and then have it generate DDL code for Oracle or Teradata, or whichever environment you need, is really nice. It's not only nice, it also saves man-hours. Otherwise, you would have to take your design and type it in manually. It takes days out of the work.
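As a rough sketch of what multi-target generation buys you (the table is hypothetical, and the type mappings shown are typical rather than exact erwin output), the same logical entity can come out in dialect-appropriate DDL:

    -- Oracle target
    CREATE TABLE customer (
        customer_id NUMBER(10)    NOT NULL,
        full_name   VARCHAR2(100) NOT NULL
    );

    -- Teradata target, generated from the same logical model
    CREATE TABLE customer (
        customer_id INTEGER      NOT NULL,
        full_name   VARCHAR(100) NOT NULL
    );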
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
Jun 30, 2020
The visual data models are excellent for overcoming data source complexity and enabling understanding and collaboration around maintenance and usage. A picture speaks a thousand words. Seeing a picture that shows how the data relates helps you better understand what the data is and how to use it. Pairing that information with a dictionary, which has the definitions of the tables and columns or the entities and attributes, ensures that users understand what the data is so that they can make the best, most successful use of it.
Technical Consultant at an insurance company with 1,001-5,000 employees
Real User
Jun 25, 2020
Any tool will do diagramming, but I think the ability to put the stuff up in a graphical fashion, then think about it and keep things consistent, is what's valuable about it. It's too easy, when you're using other methods, to end up without consistent naming standards, consistent column definitions, et cetera.
Sr. Data Engineer at a healthcare company with 10,001+ employees
Real User
Jun 25, 2020
What has been useful is that I have been able to reverse engineer our existing data models to explicitly document the referential integrity relationships and primary/foreign keys in the model, and to create subject-area-based ERDs which our clients can use when working with our databases. The reality is that our databases are not explicitly documented in the DDL with primary/foreign key relationships. You can't look at the DDL and understand the primary/foreign key relationships that exist between our tables, so the referential integrity is not easily understood. erwin has allowed me to explicitly document that and create ERDs. This has made it easier for our clients to consume our databases for their own purposes.
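The reviewer's point, sketched with hypothetical tables: production DDL often declares no keys at all, so the relationship below is invisible in the catalog until a modeler documents it in the ERD:

    -- What the production DDL actually says: no keys declared
    CREATE TABLE member (
        member_id INTEGER NOT NULL
    );
    CREATE TABLE claim (
        claim_id  INTEGER NOT NULL,
        member_id INTEGER NOT NULL   -- implicitly references member, but nothing says so
    );

    -- What the reverse-engineered, documented model makes explicit
    ALTER TABLE member ADD CONSTRAINT pk_member PRIMARY KEY (member_id);
    ALTER TABLE claim  ADD CONSTRAINT pk_claim  PRIMARY KEY (claim_id);
    ALTER TABLE claim
        ADD CONSTRAINT fk_claim_member
        FOREIGN KEY (member_id) REFERENCES member (member_id);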
Data Modeler at a logistics company with 10,001+ employees
Real User
Apr 13, 2020
We use the Forward and Reverse Engineering tools to help us speed things up and create things that would otherwise have to be done by hand, e.g., getting a database into a data model format or vice versa.
Technology Manager at a pharma/biotech company with 10,001+ employees
Real User
Mar 22, 2020
The most valuable features are being able to visualize the data in the diagrams and transform those diagrams into physical database deployments. These features help, specifically, to integrate the data. When the source data is accumulated and modeled, the target model is in erwin, and it helps resolve the data integration patterns required to map the data to the model.
Enterprise Data Architect at a energy/utilities company with 1,001-5,000 employees
Real User
Feb 13, 2020
It's important to create standard templates, and erwin is good at that; you can also customize them. You can create a standard template so that your models have the same look and feel, and then anyone using the tool is using the same font and the same general layout. erwin is very good at helping enforce that.
EDW Architect/ Data Modeler at Royal Bank of Canada
Real User
Feb 2, 2020
The solution's code generation ensures accurate engineering of data sources, as there is essentially no development time; the code hardly has to be reviewed. We have been using this solution for so long, and all the code that has been generated is accurate to the requirements. Once we generate the DDLs out of the erwin tools, the development team does a quick review of the script line by line, then just runs the script on the database and looks into other requirements, such as indexes. So, there is less effort from the development side to create tables or build a database.
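As a small, hypothetical example of that hand-off: the generated script runs as-is, and the development team only layers on physical concerns such as indexes during their quick review:

    -- Script generated from the model and run directly on the database
    CREATE TABLE account (
        account_id  INTEGER NOT NULL,
        branch_code CHAR(4) NOT NULL,
        open_date   DATE    NOT NULL,
        CONSTRAINT pk_account PRIMARY KEY (account_id)
    );

    -- Physical tuning added during the development team's review
    CREATE INDEX ix_account_branch ON account (branch_code);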
Sr. Manager, Data Governance at a insurance company with 501-1,000 employees
Real User
Jan 27, 2020
When you're getting down to the database level, where you're building a design and you're creating DDL out of it, or you're going in the other direction where you're reaching into system catalogs and bringing things back, that starts to really require specialization. Visio isn't going to reverse-engineer that for you. Those features in erwin are valuable.
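What "reaching into system catalogs" amounts to, as a hedged illustration: a reverse-engineering pass reads metadata views such as the ANSI information_schema (vendors expose equivalents, like Oracle's ALL_TAB_COLUMNS) to rebuild tables and columns as model objects:

    -- The kind of catalog query reverse engineering relies on
    SELECT table_name,
           column_name,
           data_type,
           is_nullable
    FROM   information_schema.columns
    WHERE  table_schema = 'public'   -- hypothetical schema name
    ORDER  BY table_name, ordinal_position;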
VP Enterprise Data Architecture at a financial services firm with 5,001-10,000 employees
Real User
Top 20
Jan 22, 2020
The most valuable feature is the physical or visual representation of the database, showing the tables, the columns, the foreign keys, and the ability to generate DDL, so you can physically implement databases.
erwin pioneered data modeling, and erwin Data Modeler (erwin DM) remains trusted, award-winning software for data modeling and database design, automating complex and time-consuming tasks. Use it to discover and document any data from anywhere for consistency, clarity and artifact reuse across large-scale data integration, master data management, metadata management, Big Data, business intelligence and analytics initiatives – all while supporting data governance and intelligence efforts.
Forward engineering, DDL generation, reverse engineering, and reporting are the most valuable features of the solution.
The product allows us to reuse entities and attributes.
Drag-and-drop data modeling and reverse engineering out of databases are the most valuable features of erwin Data Modeler by Quest.
We can create mappings in erwin and possibly data dictionaries.
The fitting model is very intuitive.
I have worked with erwin Data Modeler for quite some time and familiarity is its most valuable feature.
It is a scalable solution...The technical support team is fine.
The data lineage feature is very valuable.
The principal feature that I liked is that the solution has a very graphical interface.
They have a lot of features and the most up-to-date technology integration, which I haven't seen in other products.
It provides flexibility with the code. You can change the code as you want. Basically, you can change SQL based on what's best for your project.