Project Coordinator at a computer software company with 201-500 employees
Real User
Top 20
Jul 7, 2023
The use cases were for a large federal government agency with 10 smaller agencies that handle all of the metadata for that agency, and all of that metadata is sensitive PHI or PII. It includes Social Security numbers and all of the metadata for provider and beneficiary health records. The purpose of the agencies is to prevent fraud, waste, and abuse of American taxpayers' money. One of the biggest use cases is to do mappings, both manual and automated, and data lineage. The data is used by the agency for prosecution when they find fraud and waste.

At a very high level, what the Medicare or Medicaid services want is the ability to ingest their metadata so that there is transparency. They also want it to be up to date, and they want the ability to interpret it, both technically and for business use cases, meaning business terms, policies, and rules. They want end-users with different levels of technical acumen to be able to find information easily and quickly. A data steward would be a quasi-technical person, like me, who understands enough to retrieve information or a lineage. A business user would be a congressman or congresswoman or a project manager who would see visual representations. erwin has a lot of really good data visualization capabilities.

The use cases include being able to quickly look at data and evaluate it on many levels, from the granular level of a column, table, or even a view, to a zoomed-out level to see how many of a certain table or column are in a data set from each agency. Another use case is to take the data out of legacy tools, like Excel spreadsheets. In some cases, agencies are still using a mainframe from the 1960s where some of that data is stuck.
We use Data Intelligence as a metadata repository for our source and target systems and to define metadata. We also use the tool to define the mappings between the sources and the targets. It enables us to track the flow of data between systems, design data flows, and share flow implementations with developers. Our end-users seldom access Data Intelligence. We use other tools to provide end-users access to our metadata repository. Data Intelligence is used internally for projects and by users with project-oriented roles.
We use erwin Data Intelligence to map the data structures from the source systems to our logical data model. Based on this mapping, the tool automatically generates ETL procedures for us.
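For context on the mapping-driven ETL generation this reviewer describes, here is a minimal, hypothetical sketch of the general idea: a source-to-target mapping record used to render ETL code. The table names, columns, and transformation rules are invented for illustration, and this is not erwin DI's actual mechanism or output format.

```python
# Hypothetical sketch: a source-to-target mapping record driving generated ETL SQL.
# All names and rules below are made up for illustration; erwin DI's real mapping
# metadata and generated procedures are richer than this.

mapping = {
    "source_table": "LEGACY.CUSTOMER",
    "target_table": "DWH.DIM_CUSTOMER",
    "columns": [
        # (source_column, target_column, transformation rule)
        ("CUST_NO",   "customer_id",   "CAST({src} AS INT)"),
        ("CUST_NAME", "customer_name", "TRIM({src})"),
        ("DOB",       "birth_date",    "CONVERT(DATE, {src}, 112)"),
    ],
}

def generate_etl_sql(m: dict) -> str:
    """Render a simple INSERT ... SELECT statement from a source-to-target mapping."""
    select_exprs = ",\n    ".join(
        rule.format(src=src) + f" AS {tgt}" for src, tgt, rule in m["columns"]
    )
    target_cols = ", ".join(tgt for _, tgt, _ in m["columns"])
    return (
        f"INSERT INTO {m['target_table']} ({target_cols})\n"
        f"SELECT\n    {select_exprs}\nFROM {m['source_table']};"
    )

print(generate_etl_sql(mapping))
```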
Works at an insurance company with 5,001-10,000 employees
Real User
Dec 18, 2022
We are a large company, and we purchase a lot of small, medium, and large companies and roll them into our IT. As a result, we have a lot of challenges with mapping all of these systems. We brought in erwin Data Intelligence by Quest to automate some of the data transformations and get a head start on it. We map corporate entities to these business units, which means that we deal with a lot of different data sources. The biggest use case we have is mapping things like an older mainframe-type database to SQL Server, which erwin does really well. We're super impressed with it. erwin's staff were great to work with in terms of customization. For example, if we told them that we'd like customized connectors to generate DDL and SSIS packages, then in a couple of weeks they would usually have those features put in. They were very flexible and willing to make the changes on the fly. We had good direct access to the development staff, and they were great to work with.
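To make the DDL-generation idea mentioned above concrete, here is a small, hypothetical sketch of what such a connector does conceptually: take harvested source column metadata and emit SQL Server CREATE TABLE statements. The type map and columns are invented; the customized connectors the reviewer describes were built by the vendor, not written like this.

```python
# Hypothetical sketch of DDL generation from harvested metadata: map source types
# (e.g., from a mainframe catalog) to SQL Server types and emit CREATE TABLE DDL.
# The type map and column list are illustrative only.

TYPE_MAP = {"CHAR": "VARCHAR", "DECIMAL": "DECIMAL", "DATE": "DATE", "SMALLINT": "SMALLINT"}

harvested_columns = [
    # (column_name, source_type, length, precision, scale)
    ("POLICY_NO", "CHAR", 10, None, None),
    ("PREMIUM",   "DECIMAL", None, 12, 2),
    ("ISSUE_DT",  "DATE", None, None, None),
]

def to_sqlserver_ddl(table: str, columns) -> str:
    """Build a CREATE TABLE statement for SQL Server from harvested column metadata."""
    defs = []
    for name, src_type, length, prec, scale in columns:
        tgt = TYPE_MAP.get(src_type, "VARCHAR")
        if length:
            tgt += f"({length})"
        elif prec:
            tgt += f"({prec},{scale})"
        defs.append(f"    [{name}] {tgt}")
    return f"CREATE TABLE [dbo].[{table}] (\n" + ",\n".join(defs) + "\n);"

print(to_sqlserver_ddl("POLICY", harvested_columns))
```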
Data Intelligence enables us to provide deeper technical insight into our enterprise data warehouse while democratizing the solution and data. For more than 10 years, we had built our data systems without detailed documentation. We finally determined that we needed to improve our data management, and we chose Data Intelligence Suite (DIS) based on our past experience using erwin Data Modeler. After researching DIS, we also discovered other desirable features, such as the Business Glossary and Mind Map features that link various assets.
Architect at an insurance company with 10,001+ employees
Real User
Aug 10, 2022
The Data Intelligence suite helps us manage all our data governance activities around setting up metadata, data lineage, business clusters, and business metadata definitions. These are all managed in this tool. There is definitely extended use of this data set, which includes using the metadata that we have built. Using that metadata, we also integrate with other ETL tools to pull the metadata and use it in our data transformations. It is very tightly coupled with our data processing in general. We also use erwin Data Modeler, which helps us build our metadata, business definitions, and the physical data model of the data structures that we have to manage. These two tools work hand-in-hand to manage our data governance metadata capabilities and support many business processes. I manage the data architecture as well as the whole data governance team that designed the data pipelines. We designed the overall data infrastructure as well as the actual governance processes. The stewards, who work with the data in the business, set up the metadata and manage this tool every day, end-to-end.
Business Intelligence BA at an insurance company with 10,001+ employees
Real User
May 24, 2021
Our work involves data warehousing, and we originally implemented this product because we needed a tool to document our mappings. As a company, we are not heavily invested in the cloud. Our on-premises deployment may change in the future, but that depends on infrastructure decisions.
Senior Director at a retailer with 10,001+ employees
Real User
May 16, 2021
Our primary use case is to enable self-service for different business teams to be able to find different data. We are using the erwin Data Intelligence platform to enable data literacy and to enable different users to find the data by using the data catalog. It can be hosted on-premises or in the cloud. We chose to run it in the cloud because the rest of our analytics infrastructure is running in the cloud, so it made natural sense to host it there.
We use DI for data governance as part of a large system migration supporting application refresh and multi-site consolidation. Metadata Manager is used to harvest metadata, which is augmented with custom metadata properties identifying rule criteria that drive automated source-to-target mapping. A custom-built code-generation connector, written in Groovy, then automates forward engineering of the code. We've developed a small number of connectors supporting this 1:1 data migration. It's a really good product that we've been able to make very good use of.
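To illustrate the rules-driven auto-mapping idea this reviewer describes, here is a minimal, hypothetical sketch: harvested columns are tagged with a custom metadata property, and source and target columns that share the same tag are mapped automatically. The property name and columns are invented and do not reflect erwin DI's internal data model.

```python
# Hypothetical sketch of rules-driven auto-mapping: columns tagged with a custom
# metadata property ("business_key") are matched between source and target.
# All names below are illustrative only.

source_columns = [
    {"name": "CUST_NO",  "business_key": "customer_id"},
    {"name": "CUST_NM",  "business_key": "customer_name"},
    {"name": "FILLER01", "business_key": None},
]
target_columns = [
    {"name": "CustomerId",   "business_key": "customer_id"},
    {"name": "CustomerName", "business_key": "customer_name"},
]

def auto_map(sources, targets):
    """Return (source, target) column pairs that share the same business_key tag."""
    by_key = {t["business_key"]: t["name"] for t in targets if t["business_key"]}
    return [
        (s["name"], by_key[s["business_key"]])
        for s in sources
        if s["business_key"] in by_key
    ]

for src, tgt in auto_map(source_columns, target_columns):
    print(f"{src} -> {tgt}")
```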
Practice Director - Digital & Analytics Practice at HCL Technologies
Real User
Oct 14, 2020
Our clients use it to understand where data resides, for data cataloging purposes. It is also used for metadata harvesting, for reverse engineering, and for scripting to build logic and to model data jobs. It's used in multiple ways and to solve different types of problems.
Architecture Sr. Manager, Data Design & Metadata Mgmt at an insurance company with 10,001+ employees
Real User
Aug 4, 2020
We have many use cases. We have a use case to understand our metadata, understand where it is, and understand where our authoritative data systems are. We need to understand the data systems that we have. We also need to link the data models that we have to these data systems so that we know which data models are supporting which database applications. We're also linking our business data models to our physical implementation so that our data governance team is driving our data and our understanding of our data. That is one use case for the Metadata Manager. Another is the creation of automated reports that will show the changes that are made in production after a production release.

Our use cases for the Mapping Manager are around understanding where our data movement is happening and how our data is being transformed as it's moved. We want automated data lineage capabilities at the system, database, environment, table, and column levels, as well as automated impact analysis. If someone needs to make a change to a specific column in a specific database, what downstream applications or databases will be impacted? Who do we have to contact to tell that we're making changes?

When thinking about the Mapping Manager, we have another use case where we want to understand not only the data design of the mapping, but also the actual implementation of the mapping. We want to understand, from a data design standpoint, the data lineage that's in the data model, as well as the data lineage in a source-to-target mapping document. But we also want to understand the as-implemented data lineage, which comes from our Informatica workflows and jobs. So we want to automatically ingest our Informatica jobs and create mapping documents from those jobs so that we have the as-designed data lineage as well as the as-implemented data lineage.

In addition, with regard to our data literacy, we want to understand our business terminology and the definitions of our business terms. That information drives not only our data modeling, but also our understanding of the data that is in our datastores, which are cataloged in the Metadata Manager. This further helps us to understand what we're mapping in our source-to-target mapping documents in the Mapping Manager. We want to associate our physical columns and our data model information with our business glossary.

Taking that a step further, when you think about code sets, we also need to understand the data. If we have a specific code set, we need to understand whether we are going to see those specific codes in that database, or whether we are going to see different codes that we have to map to the governed code set. That's where the Codeset Manager comes into play for us, because we need to understand what our governed code sets are. We need to be able to automatically map our code sets to our business terminology, which is automatically linked to our physical tables and columns. And that automatically links the code set values, or the crosswalks that were created, when we have a data asset that does not have all of the conforming values that are in the governed code set.

We also have reporting use cases. We create a lot of reports. We have reports to understand who the Data Intelligence Suite users are, when they last logged in, the work that they're doing, and for automatically assigning work from one person to another person.
We also need automated reports that look at our mappings and help us understand where our gaps are: where we need a code set that we don't already have a governed code set for. We're also creating data dictionary reports, because we want to understand very specific information about our data models, our datastores, and our business data models, as well as the delivery data models. We are currently using the Resource Manager, Metadata Manager, Mapping Manager, Codeset Manager, Reference Data Manager, and Business Glossary Manager.
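To make the column-level impact analysis described in this review concrete, here is a minimal, hypothetical sketch: lineage treated as a directed graph of source-to-target column edges, with impact analysis as a downstream traversal from the column being changed. The edges are invented for illustration and are not how erwin DI stores lineage internally.

```python
# Hypothetical sketch of column-level impact analysis over lineage edges
# (source column -> target column). Column names are illustrative only.

from collections import defaultdict, deque

lineage_edges = [
    ("claims_db.claim.member_id", "staging.claim_stg.member_id"),
    ("staging.claim_stg.member_id", "warehouse.fact_claim.member_key"),
    ("warehouse.fact_claim.member_key", "reporting.member_summary.member_key"),
]

downstream = defaultdict(list)
for src, tgt in lineage_edges:
    downstream[src].append(tgt)

def impacted_columns(changed_column: str):
    """Return every column downstream of the one being changed."""
    seen, queue = set(), deque([changed_column])
    while queue:
        for nxt in downstream[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(impacted_columns("claims_db.claim.member_id"))
```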
Data Program Manager at a non-tech company with 201-500 employees
Real User
Apr 12, 2020
This solution is still an experiment for us. My company is in the process of defining the data governance process, which is not settled right now. We have used erwin DG for the purpose of getting acquainted with data governance from a technical point of view. We want to see how it fits in our organization, because data governance is neither purely an IT matter nor purely a business matter; it is in between. We have to put the proper organization in place in order for an IT solution to meet all the requirements. This has been in the works for almost two years now, during which we have formally been running an experiment with erwin DG. We are not fully using it as we would if it were in production running regular operations. What we have done with the tool is build a metamodel for our data and try to see how it fits with the requirements of our projects, businesses, and IT. We have two cases that are fully documented under erwin DG. What we are trying to do right now is to integrate all our regulatory obligations, including laws and regulations at the French and European levels. This would enable us to build a bridge between the businesses and the law. This is a SaaS solution maintained by erwin.
Sr. Manager, Data Governance at an insurance company with 501-1,000 employees
Real User
Jan 30, 2020
We don't have all of the EDGE products. We are using the Data Intelligence Suite (DI). So, we don't have the enterprise architecture piece, but you can pick the pieces up in modular form as part of the EDGE Suite. The Data Intelligence Suite of the EDGE tool is very focused on asset management. You have a Metadata Manager that you can schedule to harvest all of your servers, cataloging information. It brings back the databases, tables, columns, and all of the information about them into a repository. It also has the ability to build ETL specs. With Mapping Manager, you then take your list of assets and connect them together as a source-to-target mapping, with transformation rules that you can set up as reusable pieces in a library. The DBAs can use it for all different types of value-add from their side of the house. They have the ability to see particular aspects, such as RPII, and there are some neat reports which show that. They are able to manage who can look at these different pieces of information. That's the physical side of the house. They also have what they call data literacy, which is the data glossary side of the house. This is more business-facing. You can create directories that they call catalogs, and inside of those, you can build logical naming conventions to put definitions on. It all connects together. You can map the business understanding in your glossary back to your physical assets so you can see it both ways.
We're a medical company and we have our own source systems that process claims from multiple organizations or health plans. In our world, there are about 17 different health plans. Within each of those health plans, the membership, or the patients, have multiple lines of business, and the way our company is organized, we're in three different markets with up to 17 different IPAs (Independent Physician Associations). While that is a mouthful, because of data governance, and because we have our own data governance tool, we understand those are key concepts, and that is our use case: so that everybody in our organization knows what we are talking about. Whether it is an institutional claim, a professional claim, Blue Cross or Blue Shield, health plan payer, group titles, names, etc., our case represents 18 different titles. For us, there was a massive number of concepts and we didn't have any centralized data dictionary of our data. Our company had grown over the course of 20 years. We went from one IPA and one health plan to where we are today: in five markets, doing three major lines of business, etc. The medical industry in general is about 20 years behind, technology-wise, in most cases; there are a lot of manual processes. Our test use case was to start fresh after 20 years of experience and evolution and just start over. I was given the opportunity to build a data strategy, a three-year plan where we build a repository of all sources of truth for data used in governance. We have our mapping, our design, our data linkage, principles, business rules, and data stewardship program. Three years later, here we are.
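To illustrate the scheduled metadata harvesting described a couple of reviews above, here is a minimal, hypothetical sketch of the concept: query a source server's catalog views and store database, table, and column records in a central repository. The connection, query, and SQLite "repository" are stand-ins for illustration; erwin DI's scheduled scanners do this without any custom code.

```python
# Hypothetical sketch of catalog harvesting: read column metadata from a source
# database's INFORMATION_SCHEMA views and persist it in a small local repository.
# `source_connection` is assumed to be a DB-API style connection to a source
# database (e.g., SQL Server); sqlite3 stands in for the metadata repository.

import sqlite3

CATALOG_QUERY = """
    SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM INFORMATION_SCHEMA.COLUMNS
"""

def harvest(source_connection, repo_path="metadata_repo.db"):
    """Copy harvested column metadata into the repository; return the row count."""
    rows = source_connection.execute(CATALOG_QUERY).fetchall()
    repo = sqlite3.connect(repo_path)
    repo.execute(
        "CREATE TABLE IF NOT EXISTS harvested_columns "
        "(db_name TEXT, schema_name TEXT, table_name TEXT, column_name TEXT, data_type TEXT)"
    )
    repo.executemany("INSERT INTO harvested_columns VALUES (?, ?, ?, ?, ?)", rows)
    repo.commit()
    return len(rows)
```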
Solution Architect at a pharma/biotech company with 10,001+ employees
Real User
Jan 22, 2020
The three big areas that we use it for right now are metadata management as a whole, versioning of metadata, and metadata mappings and automation. We have started to adopt data profiling from this tool, but it is an ongoing process. I will be adding these capabilities to my team probably in Q1 of this year.
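For readers unfamiliar with what column profiling involves, here is a minimal, hypothetical sketch of the kinds of statistics a profiling feature gathers (row count, null count, distinct count, min/max). The sample values are invented; a real profiler reads directly from the database.

```python
# Hypothetical sketch of basic column profiling statistics. Input values are
# illustrative only; real profiling runs against the source tables themselves.

def profile_column(values):
    """Compute simple profile statistics for one column's values."""
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }

print(profile_column([10, 20, 20, None, 35]))
# {'rows': 5, 'nulls': 1, 'distinct': 3, 'min': 10, 'max': 35}
```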
We are a consultant for erwin Data Intelligence by Quest and provide the service to our customers for data access discovery.
Data Intelligence is a data management solution that connects to various data sources. It also provides data profiling and data quality management.