Control-M supports a lot of business processes. It supports some of the HR functions. I don't know if payroll is directly supported, but we do run jobs through PeopleSoft, which obviously impacts HR. Recently, we've started using the SAP module. So, we're making a transition from PeopleSoft to SAP, and I also see some payroll functions happening there.
We use Control-M to orchestrate a diverse landscape of vendor products such as Pega, MuleSoft, etc. File transfers and fetching data feeds are quite important for us, so a lot of data processing happens through Control-M.
Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all of our application workflows and data pipelines. Of course, such a diverse landscape requires you to make the effort to utilize Control-M to tie everything together or to act as the glue. Once you do that, everything is clearly defined, and you can view these disparate systems in one unified pane. If you don't define it correctly, then obviously Control-M won't have that insight, and you'll have to go to multiple locations to look at your job statuses.
We use its web interface. It is primarily for the application support teams to monitor their own jobs. The jobs defined within Control-M are tightly controlled by a specific group of people. There are also people who need to verify that jobs completed successfully or see why they failed. These people are given access through Control-M Web to view and monitor the jobs or applications they support. They're usually able to log on without having to install any client on their personal workstations, so it's quite convenient. We have not implemented its mobile interface.
The integrated file transfers with our application workflows have certainly sped up our business service delivery by 80%. It has allowed the business to integrate file transfers more readily. Prior to utilizing Control-M's file-transfer module, people had to write their own file transfer scripts in a scripting language of their choice, to varying degrees of effectiveness. With the integrated file transfer solution within Control-M, there is a standardized way of performing file transfers along with the capability of file watching and grabbing the file names that were transferred, making it much more versatile.
Control-M can immediately report when a job fails. If you have proper monitoring in place, you're notified immediately when your business flows are impacted. In the past, when you ran jobs using cron or just wrote shell scripts, you were really left in the dark because they didn't necessarily report failures, and the same is true even from within Control-M. Implementing Control-M has made the business realize how critical and important it is to have proper error coding within the scripts that they schedule. If the scripts don't report any errors or redirect the system output into log files, there is no way to detect when a job fails.
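To illustrate the kind of error coding I mean, here is a minimal sketch of a scheduled script that a scheduler such as Control-M can actually monitor. The function name and log path are hypothetical stand-ins for whatever the job really does; the point is that the script logs its output and exits non-zero on failure so the failure is detectable.

```python
#!/usr/bin/env python3
"""Minimal sketch of a scheduled script with proper error coding."""
import logging
import sys

logging.basicConfig(
    filename="/var/log/jobs/nightly_feed.log",  # hypothetical log location
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def extract_feed() -> None:
    """Placeholder for the real unit of work performed by the job."""
    raise NotImplementedError("replace with the actual processing step")

if __name__ == "__main__":
    try:
        extract_feed()
        logging.info("Feed extraction completed successfully")
        sys.exit(0)  # zero exit code: the scheduler marks the job as OK
    except Exception:
        logging.exception("Feed extraction failed")
        sys.exit(1)  # non-zero exit code: the scheduler flags the job as failed
```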
We've automated many time-consuming business reports and other things that were very manual and took a tremendous number of man-hours. We've also automated a lot of maintenance using Control-M. We've integrated with Ansible Tower, so we are now able to run Ansible playbooks and Ansible job templates. With the scheduling capability and the multitude of integrations that Control-M offers, it really acts as the unifying glue and as a communicator and orchestrator across the enterprise. With Ansible Tower, you can run a number of playbooks to perform patching, reboots, and whatever maintenance the infrastructure teams require, but you can't really do that while a business is still operating; you may be able to do it for another business that isn't operating at the moment. It is very hard to coordinate that without knowing which lines of business have jobs running. With Control-M, you can see that, and you can enact workload policies to put jobs on hold prior to running Ansible playbooks. Once your Ansible playbook is complete, you can release the jobs again by deactivating the workload policies. So, it makes those processes very streamlined.
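A rough sketch of that maintenance sequence is below: hold scheduled work with a workload policy, launch the Ansible Tower job template, then release the jobs. The Tower launch endpoint (`/api/v2/job_templates/{id}/launch/`) is a standard Tower/AWX REST call; the host, credentials, policy name, and template ID are assumptions, and the two Control-M helper functions are hypothetical placeholders, since the exact workload-policy calls depend on the Control-M version and how the integration is set up.

```python
"""Sketch: hold jobs, run an Ansible Tower job template, release jobs."""
import requests

TOWER_URL = "https://tower.example.com"        # assumed Tower/AWX host
TOWER_AUTH = ("svc_controlm", "********")      # assumed service account

def activate_workload_policy(policy_name: str) -> None:
    """Hypothetical placeholder: put jobs covered by the policy on hold."""
    print(f"Activating workload policy {policy_name} (jobs held)")

def deactivate_workload_policy(policy_name: str) -> None:
    """Hypothetical placeholder: release the held jobs."""
    print(f"Deactivating workload policy {policy_name} (jobs released)")

def launch_tower_template(template_id: int) -> None:
    """Launch an Ansible Tower job template via the Tower/AWX REST API."""
    resp = requests.post(
        f"{TOWER_URL}/api/v2/job_templates/{template_id}/launch/",
        auth=TOWER_AUTH,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    activate_workload_policy("FINANCE_PATCHING_WINDOW")  # hypothetical policy name
    try:
        # Launch is asynchronous; a real flow would poll the Tower job status
        # and only release the jobs once the playbook has finished.
        launch_tower_template(42)  # hypothetical template ID for the patching playbook
    finally:
        deactivate_workload_policy("FINANCE_PATCHING_WINDOW")
```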
We do use the Role-Based Administration feature. We have been allowing other groups to gain more control over their agents so that they can define connection profiles and do a little bit more on their side without inundating the main team with a lot of tasks. Everybody is happier. They can get things done faster, and they have immediate feedback because they're in control, while the main Control-M team is not inundated with requests from various teams to do a number of mechanical tasks. They don't get asked to create a connection profile for a database; the application teams have all the information, can do it themselves, and can define it in a way so that only they have access to it.
It has helped us to achieve faster issue resolution. Control-M reports on the error, and it is easier to view the system output of that job. Whether it is an Informatica job, a scripted job, or a database job, it is easier to go in, view the issue, and then troubleshoot from there. Most of the time, you can rerun from the point of failure if the jobs are defined correctly. For a properly defined job, I would estimate that there is a 70% to 90% reduction in the mean time to resolution.
It has helped us by improving our service-level operations performance. We've built an integration between Control-M and our ITSM, which is ServiceNow, and that has certainly allowed us to gain more visibility within our community through ServiceNow. Every time a production job fails, an incident ticket is cut, and that's highly visible. It then gets escalated, and there is a much more defined process for resolving the issue. In the past, when you didn't have that level of visibility or that integration, there was always time lost in identifying what the issue was.
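For a sense of what "cutting an incident" involves under the hood, here is a small sketch that opens a ServiceNow incident for a failed job through the standard ServiceNow Table API (`/api/now/table/incident`). The instance URL, credentials, job name, and field values are assumptions for illustration; our actual integration is configured between the products rather than hand-rolled like this.

```python
"""Sketch: open a ServiceNow incident when a production job fails."""
import requests

SNOW_INSTANCE = "https://example.service-now.com"  # assumed instance
SNOW_AUTH = ("integration_user", "********")       # assumed credentials

def open_incident(job_name: str, error_summary: str) -> str:
    """Create an incident via the ServiceNow Table API and return its sys_id."""
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=SNOW_AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": f"Control-M job {job_name} failed",
            "description": error_summary,
            "urgency": "2",
            "category": "software",
        },
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

if __name__ == "__main__":
    sys_id = open_incident("DAILY_GL_LOAD", "Job ended NOTOK; see job output for details")
    print(f"Opened incident {sys_id}")
```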