With Document Understanding, we have been able to process both structured and unstructured documents. It does not matter whether a document is structured or unstructured; the only requirement is that the data should be concise and consistent. If we are getting 70% unstructured data and 30% structured data, we are good to go, but we need to be aware of how much structured and unstructured data we are getting.

If we get a picture, we serialize it based on that. Either it follows a standardized process, or we have to use some APIs or some logic to make it structured. We initially filter based on the picture view: if the visibility of the data is less than 45% to 65%, it means the data is not structured enough, and we move it to a different folder to process later. If it is standard and structured, we process it immediately, and we do not need to worry about the chunks. Once we have achieved 45% to 65% of our target, we have a positive output in hand, and we can then work on the remaining part to make it more centralized, which makes it a bit easier for us. (A simplified sketch of this visibility-based routing appears at the end of this review.)

With Document Understanding, we are able to handle things like varying document formats, handwriting, and signatures. The approach we take depends on the nature of the data we are getting. For example, an insurance company had a mandatory requirement to verify whether the source was authentic. They had metrics at their end that said who was a legal broker and who was not. Extracting that data was not challenging for us because they already had all the information in their backend; we just used their APIs, read the data out, and compared it. (A sketch of that lookup also appears at the end.)

In terms of the human validation required for Document Understanding output, we needed to confirm whether the data coming from Document Understanding was correct. If it was not correct, we moved it to the process folder. When we marked it as incorrect, it asked us for the exact location of the field we were looking for, for example, the grand total. We defined that, it was stored in its knowledge base system, and then it was processed. It can be processed as an attended bot or as an unattended bot. It depends entirely on how much data or knowledge it has gained from humans, and day by day, with more knowledge, it becomes more capable of processing the data independently.

The average handle time depends on the number of cores the machine has. With a 14 to 16 core CPU, about three minutes are required to process a 3 MB file. It also depends on the number of pages and the complexity: if the data visibility is clear and there are no more than five pages, it can process the file in three minutes. After automating the process with Document Understanding, it takes two minutes to process a single PDF. I do not have exact data on how much time humans used to take. They were probably putting in nine hours per day, and after automating the process with Document Understanding, they are putting in two hours per day, so they are saving seven hours per day. Monthly, that is a saving of about 150 hours.

In terms of error reduction, in the beginning, we were getting a lot of machine errors, but as the process got smoother and the knowledge base system stabilized, the machine errors reduced, and the human errors reduced as well. Document Understanding helped free up the client's staff's time for other projects.
Before automation, they had seven people on the team, and after automating the process, they cut their budget and reduced the manpower from seven to four, freeing three staff members for other projects. They saved 35% to 45% of manpower; the arithmetic behind these figures is sketched below.
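To make the routing step described earlier concrete, here is a minimal sketch of how visibility-based filtering like this could be wired up. The folder layout, the visibility_score stub, and the single 0.45 cutoff standing in for the 45% to 65% band are all my assumptions for illustration; the actual implementation was not shared.

```python
# Minimal sketch of the visibility-based document routing, under the
# assumptions stated above (stub metric, hypothetical folders).
import shutil
from pathlib import Path

VISIBILITY_THRESHOLD = 0.45  # the team cited a 45% to 65% band

def visibility_score(document: Path) -> float:
    # Placeholder: in practice this would come from an OCR or
    # image-quality check on the scanned page.
    return 0.0

def route(document: Path, deferred_dir: Path, ready_dir: Path) -> Path:
    """Park low-visibility scans in a separate folder for later;
    send clean, structured ones straight through for processing."""
    target = deferred_dir if visibility_score(document) < VISIBILITY_THRESHOLD else ready_dir
    target.mkdir(parents=True, exist_ok=True)
    return Path(shutil.move(str(document), str(target / document.name)))
```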
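The broker-authenticity check amounted to reading the insurer's own records over their API and comparing them with the data extracted from the document. A hedged sketch follows; the endpoint URL, the "status" field, and the token handling are invented for illustration, since the insurer's real API is internal.

```python
# Hypothetical sketch of the broker lookup; the endpoint and response
# schema are assumptions, not the insurer's actual API.
import requests

BROKER_API = "https://insurer.example.com/api/brokers"  # hypothetical

def is_legal_broker(broker_id: str, api_token: str) -> bool:
    # The insurer already held the list of licensed brokers, so the
    # bot only had to read that record back and compare it with the
    # broker details extracted from the document.
    resp = requests.get(
        f"{BROKER_API}/{broker_id}",
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("status") == "licensed"  # assumed field
```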
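The reported savings are internally consistent, which a quick back-of-the-envelope check confirms. The 21 working days per month is my assumption; every other number comes straight from the account above.

```python
# Back-of-the-envelope check of the quoted figures.
hours_saved_per_day = 9 - 2                 # 9 h/day before, 2 h/day after
working_days = 21                           # assumed working days per month
print(hours_saved_per_day * working_days)   # 147 -> roughly 150 hours/month

staff_before, staff_after = 7, 4
print((staff_before - staff_after) / staff_before)  # 0.428... -> about 43%,
# which sits inside the quoted 35% to 45% manpower saving.
```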