What is our primary use case?
We use erwin to design conceptual, logical, and physical data models for new projects. We use the Forward Engineering tool to turn data models into new database structures and the Reverse Engineering tool to bring existing databases into erwin as data models. We also generate HTML reports of the models to share with our customers.
Whenever we have a new project that requires a new approach, we try using erwin for it. For example, if we have an XSD message file, we will try to see if there is a way to get it into erwin for better visibility of the structures that we have to work with.
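To give a sense of the kind of visibility we are after, the sketch below (in Python, purely illustrative and not anything erwin provides) just walks an XSD and lists the declared elements and their types; the file name is made up.

```python
# Hypothetical sketch: list the elements and types declared in an XSD message file
# so we can see the structures we have to work with. This only illustrates the
# kind of visibility we mean, not how erwin imports an XSD.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def walk(node, depth=0):
    for child in node:
        if child.tag == XS + "element":
            # Named element, or a reference to one declared elsewhere
            name = child.get("name", child.get("ref", "?"))
            dtype = child.get("type", "complex/inline")
            print("  " * depth + f"{name}: {dtype}")
            walk(child, depth + 1)
        else:
            walk(child, depth)

tree = ET.parse("order_message.xsd")  # hypothetical file name
walk(tree.getroot())
```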
How has it helped my organization?
The product has helped us standardize our data modeling efforts across the enterprise with regard to visuals and naming. We also use the Mart tool from erwin, which lets us store our data models in a centralized repository and gives everyone visibility into what is out there and how it is all related.
We discuss existing and new business requirements with business users, data architects, and application developers to figure out how to capture and visualize concepts and their relationships. One thing that is standard in all of our models is the use of information engineering notation; this is standard across our enterprise. We also use a hierarchical diagram layout to help visualize things, especially when we reverse engineer a database, because we want a clear visual layout.
What is most valuable?
We find a few of erwin's tools most valuable:
- The Bulk Editor lets us easily make a lot of similar changes within our data model.
- We use the Forward and Reverse Engineering tools to speed things up and create things that would otherwise have to be done by hand, e.g., getting a database into a data model format or vice versa.
- The Report Designer is extremely useful because we can create reports to share with our business users and have a business discussion with them on how things work.
We find the text manipulation through the Bulk Editor to be extremely helpful. There were times when we had a set of entities that did not follow our standards. With the help of the Bulk Editor, we were able to reshape those names with a few Excel formulas so that they followed our standards.
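To illustrate the kind of renaming we mean, here is a rough sketch of the sort of transformation we did through Excel formulas on the Bulk Editor data, rewritten in Python. The prefix and casing rules shown are made-up examples, not our actual standards.

```python
# Made-up example of cleaning entity names to a standard: strip a legacy "TBL_"
# prefix, split on underscores, and title-case each word. Our real rules differ;
# this only shows the kind of bulk text manipulation the Bulk Editor enables.
def standardize(entity_name: str) -> str:
    name = entity_name.upper()
    if name.startswith("TBL_"):               # hypothetical legacy prefix
        name = name[len("TBL_"):]
    words = [w.capitalize() for w in name.split("_") if w]
    return " ".join(words)

for raw in ["TBL_CUST_ACCT", "order_line_item"]:
    print(raw, "->", standardize(raw))
# TBL_CUST_ACCT -> Cust Acct
# order_line_item -> Order Line Item
```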
The Reverse Engineering functionality is good and easy to follow. It works really well, and for the most part we have been able to reverse engineer any database into our data model format.
We quite heavily use the existing templates to apply our standards to the data models created by our data modelers. We are able to use the templates to apply things like Naming Standards, casing on names, and colors to all our data models without having to enforce them manually.
What needs improvement?
Complete Compare is not user-friendly. For example, the "save known changes as a snapshot" option does not work as expected, and at times we are unable to find the exported files on our workstations. Complete Compare is set up to compare only the properties that are of interest to us, but some of the differences cannot be brought over from one version of the model to another, even though we click to bring objects across. Therefore, it is hard to tell at times whether Complete Compare is working as intended without manually going into the details and checking everything. If it could be redesigned so that it is easier to use when we bring things over from one model to another, and we could be sure it has been done correctly, that would be nice to have. We would probably use the tool more often if Complete Compare were easier to use.
The client performance could be improved. Currently, in some cases, deleting entities causes the program to crash. Similarly, for the Mart's performance, we need to rebuild the database indexes periodically; otherwise, browsing through the Mart or trying to open or save a data model takes unusually long.
There are several bugs we have discovered, and it would be a nice improvement if those were fixed. We encounter model corruption over time; it is one of those things that happens. There is a fix that we run to repair the corruption, either by saving the model as an XML file or by running it through the Complete Compare tool. If this process could somehow be automated, with erwin detecting when a model is corrupted and running the fix on its own, that would be helpful.
There are several Mart features that could be added, e.g., a way to automatically remove inactive sessions older than a specified date. This way we could focus on which users have been utilizing our central repository recently, as opposed to seeing everything that has happened over the last five years. This would be less of a problem if the Mart administrator did not have trouble displaying all of the sessions.
On the client side, there are some features that would come in handy for us, e.g., Google Cloud Platform support or support for some of the other cloud databases.
If we had a better way to connect and reverse engineer the databases into data models, that would help us.
Alter scripts can be troublesome to work with at times; if they could be made to work better, that would help. On the Forward Engineering side of things, the alter syntax is not enabled by default when creating alter scripts. We strongly believe this is something that should be enabled by default.
On the Naming Standards (NSM) side of things, there is a way in erwin to translate logical names into physical names based on the business dictionary that we created. However, it would be nice if we could have more than one NSM entry with the same logical element name, prioritized by importance or usage. Also, if erwin could bring the definitions that are part of the NSM into a model, we could use those definitions on entities and attributes. That would be beneficial.
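As an illustration of what we mean by translating logical names into physical names from a business dictionary, here is a minimal sketch; the glossary entries and the length limit are hypothetical and not our actual NSM configuration.

```python
# Hypothetical business dictionary mapping logical words to physical abbreviations.
GLOSSARY = {"customer": "CUST", "account": "ACCT", "number": "NBR", "identifier": "ID"}

def to_physical(logical_name: str, max_len: int = 30) -> str:
    """Translate a logical name into a physical column/table name."""
    words = logical_name.lower().split()
    abbreviated = [GLOSSARY.get(w, w.upper()) for w in words]
    return "_".join(abbreviated)[:max_len]

print(to_physical("Customer Account Number"))  # CUST_ACCT_NBR
```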
For how long have I used the solution?
We have been using it for at least 15 years, a very long time.
What do I think about the stability of the solution?
Overall, the server is mostly stable. After we implemented the reindexing fix on our database, everything works pretty well. The client side is mostly stable too, but certain actions cause the client to crash. This has been much less of an issue since we switched to the 64-bit version of erwin, which has been a great improvement.
We have found that erwin's code generation ensures accurate engineering of data sources; we haven't seen any issues. We pass our code off to DBAs to implement: the DDL that we generate goes to the DBAs, who add physical features and possibly some performance indexes, and then we reverse engineer that information back into our data models.
For our bug-related issues, we have been given the recommendation to upgrade to the latest version. We are in the process of doing that and will see how it works out. We have also submitted some other items through erwin's idea board. There are a few issues that we haven't reached out to erwin about yet.
Currently, we have a team of people who take turns helping out other users and figuring out how to do different things. If there is a server-side issue, we also have several people who will look into that. In the past, we managed a lot of this with one person, but we realized it was quite an undertaking. You either need one fully dedicated person to look into this or several people taking turns.
We have a Windows Server and a SQL Server database. Therefore, we have dedicated SQL Server staff to help us with any SQL Server issues and Windows support staff to help us with any Windows issues. We don't generally have any issues with erwin. From a technical support side, we do have support staff if we were to run into any issues. Our team of five data modelers is pretty well experienced with the tool, the Mart, and any communication issues that we might have to deal with; e.g., if the SQL Server went down, these folks would be the liaisons to the SQL Server team.
What do I think about the scalability of the solution?
Given our mostly constant user base and constant growth of new data, our impressions of the scalability are great. Currently, we have about 2,000 models in the Mart repository. Reaching this volume has slowed down interactions with the Mart compared to when we had a fresh Mart: when we first started using the Mart server, it took about two seconds to open things like the Catalog Manager or the Mart Open dialog; now it takes around 10 seconds. For the most part, it seems to be pretty scalable, and we have been able to continue using the tool given our large volume of models.
There are 35 to 40 users plus some occasional DBAs who use it to tweak any of the DDLs that they might want to pull.
We are able to develop our data models for mission-critical tasks with the solution's configurable workspace and modeling canvas. We have 20 enterprise data modelers, mostly working on the standard RDBMSs: SQL Server, Db2, and Oracle. We also use some cloud technologies, like GCP, Azure, and Couchbase. There are approximately another 15 data modelers who work exclusively in Oracle Business Intelligence from a data modeling aspect, on dimensional repository and data warehouse work. Therefore, we have about 35 to 40 data modelers in our organization covering pretty much every major project that passes some sort of funding gate.

Anything that is mission-critical for our organization comes through one of our two managers, depending on whether it is relational modeling or dimensional modeling. All of the database designs come through these two groups. There are some smaller database designs which we may not be involved with, but all of the critical application work comes through these teams. With regard to focusing on mission-critical tasks, we really wouldn't be able to do it without a tool like erwin. Since we are all very well trained in erwin, it is the tool that we leverage to do this.
Erwin generates the DDL for all our projects. We rely on the tool for accuracy as some of our projects have hundreds of entities and tables.
How are customer service and technical support?
When an issue is bug-related, we get a bug fix or are told to upgrade to the latest version, and this has worked out in the past. When it is question-related, we have been pretty happy with their Tier 1 support's responses. We receive a solution or a suggestion on how to proceed in a very timely manner.
We would like support for JSON reverse engineering. That is something which is completely missing, but is something we have been working with quite often recently. If erwin could support this, that would be incredible.
How was the initial setup?
On the client side, the setup was mostly straightforward. It was a matter of going through the installer, reading a little bit, then proceeding to the next step. In the end, the installation was successful.
On the server side, it has been a bit more complex. We did have some documentation provided by erwin, but it was neither fully intuitive nor step-by-step, and some things were missing. It was enough to get started and then figure things out along the way.
On the client side, it takes five to 15 minutes to do the installation or to upgrade to a newer version. On the server side, from the moment we backed up everything on the server and disabled the old Mart application, the upgrade took about two hours. If you include all the planning, testing, and giving our users enough time to do everything, the upgrade took about three months. In general, these are the timeframes we have experienced in the past.
What about the implementation team?
We simply used the documentation provided by erwin. Between the few of us who worked on the upgrade at our company, we had enough of a technical background to be able to figure things out on our own. There were five to 10 people who worked on this initially:
- We had one person who helped with the database side of things.
- We had another person do everything on the application server.
- To test out the different features of erwin in the new version and ensure that the existing features worked as intended, we involved several additional people from our team.
We go through a pretty rigorous testing procedure when we bring in a new release of any software like this. Although it does not affect customers directly, it certainly affects 35 to 40 people, so we want to ensure that we do not disrupt them by having something not work. Normally, we go through this with any product: we first install it in a test environment and have a bunch of folks jump on to make sure everything is working the way we want, and we work out all the kinks in setting up the production server before we move it into production.
What was our ROI?
It is an invaluable tool for us. It has been part of our data governance process with regard to database design for at least 15 years.
The amount of time saved is proportional to the amount of change in the databases that we are implementing at any time. The more code we generate (because the model is bigger), the more time we save, because we don't have to write everything up manually and check that the code is correct. If we had to give a number, this saves us anywhere from minutes to hours of work. The timeframe depends on the data modeler, as some data modelers generate more code than others, so it could be on a daily, weekly, or monthly basis, and it also depends on the project; some projects are in maintenance mode and not going through a lot of changes. It is also much easier with this solution because we have a data model to reference: someone can pick up something that was developed approximately two months ago, versus having to generate changes to a database without a data modeling tool.
The tool certainly makes the data modeling staff more productive than if they did not have a similar tool. Without erwin, our jobs would be a lot more tedious and take a lot more time.
Which other solutions did I evaluate?
We evaluated IDERA two years ago and decided to stay with erwin, mostly because the staff is familiar and comfortable with the tool. We think that was the overriding factor. The other consideration was that converting from erwin to IDERA would be a major undertaking that we just weren't prepared to take on.
The fact that it can generate DDL is a major advantage over something like Visio, where you can also draw a database diagram. We don't have a Visio version that generates DDL, so I am assuming it doesn't, and any tool that can generate code for database definitions will certainly have an advantage over a product that can't.
What other advice do I have?
I would certainly recommend this product to anyone else interested in trying it out. The support from the vendor is great. The tool overall performs well and is a good product to use.
Having a collaborative environment such as the one that erwin provides through the Mart is extremely beneficial. Even if multiple people aren't working on a single model, it is nice to have a centralized place for all the models. It gives us visibility and keeps everything in one place. It also supports versioning, which allows us to go back to a model as it was at different points in time, which is really helpful.
We do not use erwin to make changes directly to the database.
We have no current plans to increase our usage of erwin other than adding more models.
We would rate the solution overall as an eight (out of 10).
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.