What is our primary use case?
The use case was normally to update data model designs for transaction processing systems and data warehouse systems. Part of our group was also doing data deployment, though I personally didn't do it. The work I did was mostly for the online transaction systems and for external file designs.
I didn't use it for data sources. I used the solution to generate code for the target database. In other words, I went from the model to the database by generating the DDL code out of erwin.
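To give a sense of what that output looks like, here is a minimal sketch of forward-engineered DDL. The CUSTOMER and ACCOUNT tables and their columns are made up for illustration, not taken from one of our actual models:

CREATE TABLE CUSTOMER (
    CUSTOMER_ID   INTEGER       NOT NULL,
    LAST_NAME     VARCHAR(50)   NOT NULL,
    FIRST_NAME    VARCHAR(50)   NOT NULL,
    CONSTRAINT PK_CUSTOMER PRIMARY KEY (CUSTOMER_ID)
);

CREATE TABLE ACCOUNT (
    ACCOUNT_ID    INTEGER       NOT NULL,
    CUSTOMER_ID   INTEGER       NOT NULL,   -- foreign key carried in from the relationship
    OPEN_DATE     DATE          NOT NULL,
    CONSTRAINT PK_ACCOUNT PRIMARY KEY (ACCOUNT_ID),
    CONSTRAINT FK_ACCOUNT_CUSTOMER FOREIGN KEY (CUSTOMER_ID)
        REFERENCES CUSTOMER (CUSTOMER_ID)
);

The keys and constraints come straight from the relationships drawn in the model, which is what keeps the database and the diagram in step.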
We had it on-premises. There was a local database server on SQL, and we each had a client that we installed on our machines.
How has it helped my organization?
At one of my previous jobs, we had a lot of disparate databases that people had built on their PCs, which were under their desks. We were under a mandate to bring all of that into a controlled environment that our DBAs could monitor, tune, etc. Therefore, this was a big improvement. I would put the data that was in whatever source into an Excel spreadsheet, convert it into a SQL file (putting in the commas), and then reverse engineer that SQL into a data model. That saved us a tremendous amount of time compared to building the data model from scratch.
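As a rough illustration of that trick, the hand-built SQL file was nothing more than CREATE TABLE statements assembled from the spreadsheet rows. The table and column names below are invented for the example, not taken from the actual project:

CREATE TABLE DEPT_TRACKING (   -- hypothetical table from one of those desktop databases
    TRACKING_ID    INTEGER,
    DEPT_CODE      CHAR(4),
    STATUS_DESC    VARCHAR(100),
    LAST_UPDATED   DATE
);

Once erwin reverse engineered a file like that, I had entities, attributes, and datatypes in the model to refine, instead of keying every column in by hand.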
I educated a number of my colleagues who were in data architecture and writing the DDL by hand. I showed them, "You do it this way from the model." That way, you never have to worry about introducing errors or having a disconnect between what is in the model and the database. I was able to get management support for that. We enhanced the accuracy of our data models.
What is most valuable?
I do like the whole idea of being able to identify your business rules. In my last position, I got acquainted with using it for data lineage, which is so important now with the current regulatory environment because there are so many laws or regulations that need to be adhered to.
If you're able to show where the data came from, then you know the source. For example, I was able to use user-defined properties (UDPs) on one job where we were bringing in the data from external XML files. I would record at the UDP level where the data came from. On another job, we upgraded a homegrown database that didn't meet our standards, so we changed the naming standards. I put in "formerly known as" UDPs so I could run reports, because our folks in MIS who were running the reports were more familiar with the old names than the new names. Therefore, I could run a report so they could see, "This is where you find what you used to call X, and it is now called Y." That helped.
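erwin keeps UDPs inside the model rather than in the database, so the following is only an analogy expressed as SQL column comments, with hypothetical names, but it shows the kind of old-name-to-new-name mapping those "formerly known as" reports gave the MIS folks:

COMMENT ON COLUMN CUSTOMER.CUSTOMER_LAST_NAME IS 'Formerly known as: CUST_LNAME';
COMMENT ON COLUMN CUSTOMER.CUSTOMER_BIRTH_DATE IS 'Formerly known as: CUST_DOB';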
The generation of DDL saved us from having to write it by hand. You still had to go in and make some minor modifications to make it deployable to the database system. However, for data lineage, it is very valuable for tracing our use of data, especially personal, confidential data, through different systems.
Complete Compare is good for double checking your work, how your model compares with prior versions, and making sure that your model reflects the database design. At my job before my last one, every now and then the DBAs would go in and make updates to correct a production problem, and sometimes they would forget to let us know so we could update the model. Therefore, periodically, we would go in and compare the model to the database to ensure that there weren't any new indexes or changes to the sizes of certain data fields without our knowing it. However, at the last job I had, the DBAs wouldn't do anything to the database unless it came from the data architects so I didn't use that particular function as much.
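The kind of drift Complete Compare catches can also be spot-checked straight from the catalog. As a sketch, using the standard INFORMATION_SCHEMA views that most databases expose (view names vary by platform), this is the sort of query we would eyeball against the model when we suspected a column size had been changed in production:

SELECT TABLE_NAME,
       COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'CUSTOMER'        -- hypothetical table name
ORDER  BY ORDINAL_POSITION;

Complete Compare does that comparison wholesale, model against database, and presents the differences so you can decide which side to correct.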
If the source of the data is an OLTP system and you're bringing it into a data warehouse, erwin's ability to compare and synchronize data sources with data models, in terms of accuracy and speed, is excellent for keeping them in sync. We did a lot of our source-to-target work with Informatica. We sometimes used erwin to generate the spreadsheets that we would give our developers. This was a wonderful feature that isn't very well-known or well-publicized by erwin.
Previously, we were manually building these Excel spreadsheets. By using erwin, we could click on the target environment, which is the table that we wanted to populate. Then, it would automatically generate the input to the Excel spreadsheet for the source. That worked out very well.
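To make that concrete, each row of one of those source-to-target spreadsheets boils down to something you could also express in SQL. The staging table, target table, and transformation rules below are made up for the example, not our actual mappings:

-- Target: DW.CUSTOMER_DIM (hypothetical)
SELECT s.CUST_NO        AS CUSTOMER_ID,         -- straight move
       UPPER(s.LAST_NM) AS CUSTOMER_LAST_NAME,  -- standardize case
       s.BIRTH_DT       AS CUSTOMER_BIRTH_DATE  -- rename only
FROM   STG.CUSTOMER s;                          -- hypothetical source staging table

Because the target side of every row came out of the model, the spreadsheets the Informatica developers received always matched the approved design.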
What needs improvement?
When you do a data model, you can depict the tables. However, sometimes I would find it quicker to just do a screenshot of the tables in the data model, put it in a Word document, and send it to the software designers and business users to let them see how I organized the data. We could also share the information on team calls, so everybody could see it. That was quicker than trying to run reports out of erwin, because sometimes we got mixed results, which took more time than they were worth. If you're just going in and making changes to a handful of tables, I didn't find the reporting capabilities that flexible or easy to use.
The report generation has room for improvement. I think it was version 8 where you had to use Crystal Reports, and it was so painful that the company I was with just stayed on version 7 until version 9 came out and restored the data browser. That's better than it was, but it's still a little cumbersome. For example, you run it in erwin, export it out to Excel, and then you have to do a lot of cosmetic modification. If you discover that you missed a column, then you have to rerun the whole thing. Sometimes you would just go ahead and fix it in the report, and then you have to remember to go back and fix it in the model. Therefore, I think the report generation could still use some work.
I don't see that it helped me that much in identifying data sources. Instead, I would have to look at something like an XML file, then organize and design it myself.
For how long have I used the solution?
I started working with Data Modeler when I was in the transportation industry. However, that was in the nineties, when it was version 1 and less than $1,000.
What do I think about the stability of the solution?
I found it pretty stable. I didn't have any problems with it.
Sometimes, when you're working with Model Mart, the connection would drop once in a while. What I don't like is that if you don't consistently save, you can lose a lot of changes. That's something that I think should work more like Word. If for some reason your system goes down, there's an interruption, or you just forget or get distracted by a phone call, then you go back and find that something happened, and you might have lost hours' worth of work. That was always painful.
What do I think about the scalability of the solution?
I have worked on databases that had as many as a thousand tables. In terms of volume and versioning, it is fine. We've used Model Mart to house versions, which introduces another level of complexity in keeping the versioning consistent.
There is a big learning curve with using Model Mart. Therefore, a lot of groups don't really utilize it fully the way they should. You need somebody to go in there every now and then to clean things up. We had some pretty serious standards around when you deployed to production and how you moved it in Model Mart. We would use Complete Compare there. It scaled well that way.
In terms of the number of users, we had 20 to 30 different data architects using it. I don't know that everybody was on it full-time, all the time. I never saw a conflict where we were having trouble because too many people were using it. From that point, it was fine.
I think the team got as large as it was going to get. In fact, right now they're on a hiring freeze because of COVID-19.
How are customer service and technical support?
Over a period of five or 10 years, the few times I've had to go all the way through to erwin, I talked to the same young lady, who is very good. She understood the problem, worked it, and would give me the solution within two phone calls. This was very good.
Which solution did I use previously and why did I switch?
Prior to erwin, I had used Bachman and IEF. Bachman I liked better, but IEF was way too cumbersome.
Bachman was acquired by another company and disappeared from the marketplace. The graphics were very pretty on Bachman. Its strongest feature was reverse engineering databases. I found erwin just as robust with its reverse engineering.
IEF also disappeared from the marketplace, and I didn't use it very much. I didn't like it, as it was way too cumbersome. You needed a local administrator. It was really tough. It promised to generate code and databases, and it was supposed to be an all-encompassing CASE tool. I just don't think it really delivered on that promise.
It could very well be that the coding of those solutions didn't keep up with the latest languages. There was a real consolidation of data modeling tools in the last 15 to 18 years. Now, you've only got erwin and maybe Embarcadero. I don't think there's anything else. erwin absorbed a lot of the other solutions but didn't integrate them very well. We were suffering when it didn't work. However, with the latest versions, I think they've overcome a lot of those problems.
How was the initial setup?
Usually, the companies already had erwin in place. We had one company where the DBAs would sort of get us going.
The upgrades were complex. They required a lot of testing. About a year ago, we held off doing them because we wanted to upgrade to the latest version and we were also in the midst of a very big system upgrade. Nobody wanted to take the time. It took one of our architects working with other internal organizations, and then there were about three or four of us who tried to do the testing of the features. It was a big investment of time, and I thought that it should have been more straightforward. I think companies would be more willing to upgrade if it wasn't so painful.
The upgrade took probably two months because nobody was working on it full-time. They would work on it while they could. One of the architects ended up working late, over the weekends, and everything trying to get it ready before we could roll it out to the entire team.
For the upgrades, there were at least half a dozen people across three different groups. There were three or four data architects in our group, and then we had two or three desktop support and infrastructure people for the server issues.
What about the implementation team?
I think they used Sandhill for the initial installation.
If it's the first time, I recommend engaging a third-party integrator, like Sandhill, whom I found very good and responsive.
What's my experience with pricing, setup cost, and licensing?
We always had a problem keeping track of all the licenses. All of a sudden, you might get a message that your license had expired and you didn't know, and it happened at different times. At GM Finance, they engaged Sandhill to help us manage it. I was less involved because of the use of Sandhill, who was very helpful when we had trouble with our license. I remember you had to put in a long string of characters and be very careful that you generated it rather than cutting and pasting it from an email. It was so sensitive and really difficult until the upgrades.
If there was a serious problem, it was usually around the licensing, where there was some glitch. Then, we would call Sandhill, who would help us out with it. That's something where we had to involve a third party for any technical difficulties.
I wish it wasn't so expensive. I would love to personally buy a copy of my own and have it at home, because the next job that I'm looking at is probably project management and I might not have access to the tool. I would like to keep my ability to use the tool. Therefore, they should probably have pricing for people like me who just want to use the solution as an independent consultant trying to get started. $3,000 is a big hit.
I think you buy a block of users because I know the company always wanted to manage the number of licenses.
Which other solutions did I evaluate?
I really haven't spent a lot of time on other data modeling tools. I have heard people complain about erwin quite a bit, "Oh, we wish we had Embarcadero," or something like that. I haven't worked with those tools, so I really can't say that they're better or worse than erwin, since erwin is the only data modeling tool that I've used in the last 15 years.
What other advice do I have?
There might be some effort to do some cloud work at my previous place of employment, but I wasn't on those projects. I don't think they've settled on how they're going to depict the data.
Some of the stuff in erwin Evolve, and the way in which it meshes with erwin Data Modeler, was very cool.
Sometimes, your model would get corrupted, but you could go back in and regenerate it by reverse engineering the XML that underlies the model. This would repair it. When I showed this to my boss, he was very impressed. He said, "Oh man, this is where we used to always have to call Sandhill." I replied, "You don't have to do that. You just need to do this." That worked out pretty well.
Biggest lesson learned: understanding your data in a graphical way is enormously valuable when communicating with developers and testers, because they can recognize the relationships and the business rules. Capturing the metadata and business-English definitions, and then generating them, made their lives so much easier. Everybody on the team could understand what a data element or group of data elements represented. This is the capability I've relied on most in my development work and career.
I would rate this solution as an eight out of 10.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.