What is our primary use case?
We use IBM Datacap for data capture across several different document types. Users can scan documents and upload them to the backend repository where they're stored.
There are at least seven types of applications. For example, one of our clients is the Department of Human Services in Hawaii. They need to know when someone qualifies for financial assistance, for example, if they are elderly or pregnant.
I currently handle data capture, and IBM Datacap is part of my project. We use Datacap as a scanning portal for connecting scanners, capturing data, setting up indexes, and so on.
My product is an online eligibility system. Users can check their eligibility by filling out an application form, entering the necessary information, and attaching supporting documents such as adoption papers, degree verification, etc.
They upload the required documents to show they are eligible for renewal. Datacap helps them select the application type, and there is a barcode index form. Datacap performs a step-by-step classification and verification process: it goes through each classification step, verifies the data, and in the end exports the data to the correct repository. Various document types are available, so the user can select one and upload it to the document index. All documents are stored in that system, and we can use them anywhere.
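To make the flow concrete, here is a minimal sketch in Python of a document moving through classify, verify, and export steps. This is not Datacap's actual API; all names, fields, and rules here are hypothetical, chosen only to mirror the barcode-driven classification and index-field verification described above.

```python
# Hypothetical sketch of a classify -> verify -> export pipeline.
# None of these names come from the real Datacap API.
from dataclasses import dataclass, field

@dataclass
class Document:
    pages: list
    doc_type: str = "unknown"
    fields: dict = field(default_factory=dict)

def classify(doc):
    # Assumed rule: a barcode on the first page identifies the form type.
    first_page = doc.pages[0]
    doc.doc_type = first_page.get("barcode", "unclassified")
    return doc

def verify(doc):
    # Verification step: required index fields must be present for this type.
    required = {"renewal_form": ["applicant_id", "signature_date"]}
    missing = [f for f in required.get(doc.doc_type, []) if f not in doc.fields]
    return doc, missing

def export(doc, repository):
    # Export step: file the verified index data under its document type.
    repository.setdefault(doc.doc_type, []).append(doc.fields)

repo = {}
doc = Document(pages=[{"barcode": "renewal_form"}],
               fields={"applicant_id": "A-100", "signature_date": "2023-01-15"})
doc, missing = verify(classify(doc))
if not missing:
    export(doc, repo)
```

In the real product, each of these stages is a configured rule in the Datacap workflow rather than hand-written code, but the shape of the process is the same: classify, verify each field, then export to the repository.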
What is most valuable?
Datacap's Content Navigator, which is the UI for end users, is great. IBM Content Navigator is highly developed with many features, and we can customize it by adding more. When someone scans something, we can see who scanned it and get notifications.
Our technical team has access to the CRM system, so they can check whether there are any applications. Once an application is handed to the Datacap portal, they can continue it and send feedback through the CRM. On the Datacap side, multiple people can view it, and we can set up validation tests.
We can scan thousands of documents. Our on-site team has set up a UIT system, and you can scan tons of documents fast. There is also a system that captures whatever data we enter. Datacap automatically recognizes what we write with pen and paper and captures everything. We also get reports on the scanned and exported data loaded into the system.
Datacap's performance has improved over the years. It's fast even with a hardware printer and scanner. It only takes seconds to capture a document, so we can process thousands of documents quickly.
I can have all scanners accessible from my end and verify whether they are up and whether roles have been added to them.
We can see a panel with application-type examples. There are input fields where users enter their data, and the same data is easily added to the database. It integrates with an Oracle database to store content, including PDFs. Everything fits together.
There are queues, and we can get everything from the database. There's a manual maintenance tool that we can use for batch deletion or reporting, and we can export the results to Excel. I've written automation scripts for this that I can upload. We're not using robotic process automation; rather, it's framework automation. We work with a source tool based on that automation technology.
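As an illustration of the kind of maintenance script described above, here is a small Python sketch that purges finished batches from a queue and produces a CSV report that could be opened in Excel. The batch records, statuses, and function names are all hypothetical; they are not part of any Datacap tool.

```python
# Hypothetical maintenance sketch: remove exported batches from a queue
# and render a deletion report as CSV (openable in Excel).
import csv
import io
from datetime import date

batches = [
    {"batch_id": "B001", "status": "exported", "created": date(2023, 1, 2)},
    {"batch_id": "B002", "status": "pending",  "created": date(2023, 3, 9)},
    {"batch_id": "B003", "status": "exported", "created": date(2023, 2, 1)},
]

def purge_exported(batches):
    """Split batches into (kept, deleted); exported batches are deleted."""
    deleted = [b for b in batches if b["status"] == "exported"]
    kept = [b for b in batches if b["status"] != "exported"]
    return kept, deleted

def report_csv(deleted):
    """Render the deleted batches as CSV text for an Excel report."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["batch_id", "status", "created"])
    writer.writeheader()
    writer.writerows(deleted)
    return buf.getvalue()

kept, deleted = purge_exported(batches)
```

In practice a script like this would query the batch tables in the Datacap database instead of an in-memory list, but the pattern of select, delete, and report is the same.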
What needs improvement?
The newer IBM Datacap releases are delivered as a plugin only, rather than as a complete package. The plugin for ICN can be deployed on the ICN desktop, which is the UI; there is an admin desktop and a normal user desktop. When we log into the admin desktop, we can see and deploy all of these plugins for Datacap as well, and deploy new features for the backend installations. Right now, we can only keep a backup of the existing data, then install the newer version and reinstall everything.
For how long have I used the solution?
I have three and a half years of experience with Datacap, excluding my training on POD.
What do I think about the stability of the solution?
There are some stability issues, though I haven't seen many. There was a time when we scanned documents and sent them to the repository we integrate with, and sometimes the date was wrong because a default time was being sent. We discussed it with the client, and customizations were needed.
The document code does not extend, so I have to rewrite it.
However, one database supports another.
How are customer service and support?
I've met the PML team and the IBM team. I've worked with IBM and have no trouble pinging them when I have questions. If the code doesn't support a customization I'm doing, I can reach out.
In the final system, Datacap sends all documents to the repository as files. However, during export there is sometimes a format issue. We reached out; they had released a new version, and we hit an encryption issue with the PDFs. They helped us through it, and eventually we got it fixed. I got what I needed out of support.
Which solution did I use previously and why did I switch?
I previously worked with content management solutions; however, I didn't deal with scanning products. Now I'm working on Datacap. I didn't have much chance to work on Captiva or Kofax, as I was working on multiple projects as the final developer.
How was the initial setup?
Datacap is not that difficult to set up; however, there are some limitations. For example, it's only supported on Windows. It won't run on a Linux platform, as it has been built on top of .NET technologies, and the code itself is limited. I'm basically a Java developer, and I rarely see development using .NET, yet Datacap is built entirely on .NET and C#, so we have to write our customizations that way.
The scope isn't quite up to the mark, but in the future there may be an open-source side.
In terms of deployment, the planning was done for a minor version upgrade with completely new software. We planned for three months, with more than a month of buffer. It has been a long process; otherwise, you can complete it, including unit testing, within three months.
What's my experience with pricing, setup cost, and licensing?
The solution comes as part of a bundle package, and licensing is hard to calculate. There's no real difference between the cloud and on-premises versions. The Kubernetes/OpenShift Cloud Pak is a different process.
It is only expensive when we take it together with FileNet.
What other advice do I have?
I'm a customer.
So far, we are using Datacap exclusively on the cloud; the infrastructure team handles that on their end. Previously, we had version 9.053 installed in an on-premises environment. Since then, we have installed it on the Nutanix cloud, though the process is still ongoing. They have only installed it on VMs, but they give me access for development and editing.
I do not yet have approval for the SIT environment, so I'm doing all activities in the development environment. As I understand it, IBM BAW is completely cloud-based automation, and there is an OpenShift Cloud Pak as well. We are now maintaining these files along with Datacap.
I'd suggest that potential users try it. Compared to other products, Datacap is cheaper. The stability and all the features are great. The user-friendly features look good, and you need less customization; IBM has given us almost 80% of the features we need with Datacap.
I'd rate the solution eight out of ten.
*Disclosure: I am a real user, and this review is based on my own experience and opinions.