What is our primary use case?
Control-M supports a lot of our business processes, including some HR functions. I don't know if payroll is directly supported, but we do run jobs through PeopleSoft, which obviously impacts HR. Recently, we've started using the SAP module, as we're making a transition from PeopleSoft to SAP, and I see some payroll functions happening there as well.
How has it helped my organization?
We use Control-M to orchestrate a diverse landscape of vendor products such as Pega and MuleSoft. File transfers and fetching data feeds are quite important for us, so a lot of data processing happens through Control-M.
Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all of our application workflows and data pipelines. Of course, such a diverse landscape requires you to make the effort to use Control-M to tie everything together, to act as the glue. Once you do that, everything is clearly defined, and you can view these disparate systems in one unified pane. If you don't define things correctly, then Control-M won't have that insight, and you'll have to go to multiple locations to look at your job statuses.
We use its web interface. It is primarily for the application support teams to monitor their own jobs. The jobs defined within Control-M are tightly controlled by a specific group of people, but there are also people who need to verify that jobs completed successfully or see why they failed. Those people are given access through Control-M Web to view and monitor the jobs and applications they support. They're able to log on without having to install any client on their workstations, so it's quite convenient. We have not implemented its mobile interface.
The integrated file transfers within our application workflows have certainly sped up our business service delivery by 80%. Integration has allowed the business to adopt file transfers more readily. Prior to utilizing the Control-M module, people had to write their own file transfer scripts in a scripting language of their choice, with varying degrees of effectiveness. With the integrated file transfer solution within Control-M, there is a standardized way of performing file transfers, along with the capability of watching for files and grabbing the names of the files that were transferred, which makes it much more versatile.
Control-M can immediately report when a job fails. If you have proper monitoring in place, you're notified immediately when your business flows are impacted. In the past, when you ran jobs using cron or just wrote shell scripts, you were really left in the dark because they didn't necessarily report failures. Implementing Control-M has made the business realize how critical it is to have proper error coding within the scripts that they schedule. If a script doesn't report errors or redirect its system output into log files, there is no way to detect that a job has failed, even from within Control-M.
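As a minimal illustration of that discipline, this generic sketch (the script path and log location are invented placeholders) shows the pattern we push teams toward: log everything and exit nonzero on failure so the scheduler can actually see the failure:

```python
#!/usr/bin/env python3
"""Wrapper for a scheduled task: log everything, exit nonzero on failure."""
import logging
import subprocess
import sys

# Placeholder paths: point these at your real script and log location.
logging.basicConfig(filename="/var/log/jobs/nightly_feed.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def main() -> int:
    try:
        # check=True raises CalledProcessError if the command exits nonzero.
        result = subprocess.run(["/opt/feeds/extract.sh"],
                                capture_output=True, text=True, check=True)
        logging.info("extract succeeded: %s", result.stdout.strip())
        return 0
    except subprocess.CalledProcessError as exc:
        # Propagating a nonzero exit code is what lets the scheduler
        # mark this job as failed instead of silently succeeding.
        logging.error("extract failed (rc=%s): %s",
                      exc.returncode, exc.stderr.strip())
        return exc.returncode or 1

if __name__ == "__main__":
    sys.exit(main())
```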
We've automated many time-consuming business reports and other things that were very manual and took a tremendous amount of man-hours. We've also automated a lot of maintenance using Control-M. We've integrated with Ansible Tower, so we are now able to run Ansible playbooks and Ansible job templates. With the scheduling capability and the multitude of integrations that Control-M offers, it really acts as the unifying glue and as a communicator and orchestrator across the enterprise. With Ansible Tower, you can run a number of playbooks to perform patching, reboots, and whatever maintenance the infrastructure teams require, but you can't really do that while a given line of business is still operating, even though you could do it for another line of business that isn't operating at the moment. It is very hard to coordinate that without knowing which lines of business have jobs running. With Control-M, you can see that, and you can enact workload policies to put jobs on hold prior to running Ansible playbooks. Once your Ansible playbook is complete, you can release the jobs again by deactivating the workload policies. So, it makes those processes very streamlined.
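As a rough sketch of that sequence: hold, patch, release. The host, policy name, and especially the workload-policy endpoint path here are assumptions for illustration, so verify them against your Automation API version:

```python
"""Hold jobs via a Control-M workload policy around an Ansible maintenance window.

Sketch only: the host, policy name, and the workload-policy endpoint path
are assumptions; verify them against your Automation API version before use.
"""
import requests

ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder host
POLICY = "hold_finance_batch"                                  # placeholder policy

def login(user: str, password: str) -> str:
    # session/login returns a bearer token (standard Automation API behavior).
    resp = requests.post(f"{ENDPOINT}/session/login",
                         json={"username": user, "password": password})
    resp.raise_for_status()
    return resp.json()["token"]

def set_policy(token: str, action: str) -> None:
    # Assumed endpoint shape for activating/deactivating a workload policy.
    resp = requests.post(f"{ENDPOINT}/run/workloadpolicies/{POLICY}/{action}",
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()

if __name__ == "__main__":
    token = login("maint_user", "********")
    set_policy(token, "activate")      # jobs governed by the policy go on hold
    # ... trigger the Ansible Tower job template for patching here ...
    set_policy(token, "deactivate")    # release the held jobs
```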
We do use the Role-Based Administration feature. We have been allowing other groups to gain more control over their agents so that they can define connection profiles, and they can do a little bit more on their side without inundating the main team with a lot of tasks. Everybody is happier. They can get things done faster, and they have immediate feedback and response because they're in control. The main Control-M team is not inundated with a lot of different requests from various teams to do a number of mechanical tasks. They don't get asked to create the connection profile for a database. People have all the information there, and they can do it themselves. They can define it in a way so that only they have access to it.
It has helped us to achieve faster issue resolution. Control-M reports on the error, and it is easy to view the system output of the job. Whether it is an Informatica job, a scripted job, or a database job, it is easier to go in, view the issue, and then troubleshoot from there. Most of the time, you can rerun from the point of failure if the jobs are defined correctly. For a properly defined job, I would estimate a 70% to 90% reduction in the mean time to resolution.
It has helped us by improving our service-level operations performance. We've built an integration between Control-M and our ITSM, which is ServiceNow, and that has certainly allowed us to gain more visibility within our community through ServiceNow. Every time a production job fails, an incident ticket is cut, and that's highly visible. It then gets escalated, and there is a much more defined process for resolving the issue. In the past, when you didn't have that level of visibility or that integration, there was always time lost in identifying what the issue was.
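The glue we built is conceptually similar to this sketch. ServiceNow's Table API (POST /api/now/table/incident) is real and well documented, but the instance URL, credentials, and field choices here are placeholders:

```python
"""Create a ServiceNow incident when a production job fails.

Sketch of the kind of glue we built: the instance URL, credentials, and
field values are placeholders; check which fields your instance requires.
"""
import requests

SNOW = "https://example.service-now.com"   # placeholder instance

def open_incident(job: str, folder: str, output_tail: str) -> str:
    resp = requests.post(
        f"{SNOW}/api/now/table/incident",
        auth=("integration_user", "********"),   # placeholder credentials
        headers={"Accept": "application/json"},
        json={
            "short_description": f"Control-M job {folder}/{job} failed",
            "description": f"Last output:\n{output_tail}",
            "urgency": "2",
            "category": "software",
        },
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]   # e.g. INC0012345

if __name__ == "__main__":
    print(open_incident("load_gl_feed", "FIN_DAILY",
                        "ORA-00942: table or view does not exist"))
```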
What is most valuable?
The File Transfer component is quite valuable. The integration with products such as Informatica and SAP is very valuable to us as well; rather than having to build our own interfaces into those products, we can use the ones that come out of the box. The integration with databases is valuable too, as we use database jobs quite a bit. The file watcher component is also indispensable when integrating with applications that generate files, because it lets us trigger a workflow on file arrival rather than on a timer.
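To show why the integrated approach beats hand-rolled scripts, here is a sketch of a declarative transfer in the Jobs-as-Code JSON format, built as a Python dict. The connection profiles, paths, and folder names are invented, and the Job:FileTransfer field names should be verified against your version's documentation:

```python
"""Declarative file transfer, Jobs-as-Code style, instead of a hand-rolled script.

Sketch: connection profiles, hosts, and paths are invented placeholders;
verify the Job:FileTransfer field names against your Automation API version.
"""
import json

transfer = {
    "VENDOR_FEEDS": {
        "Type": "Folder",
        "GetVendorFile": {
            "Type": "Job:FileTransfer",
            "ConnectionProfileSrc": "SFTP_VENDOR",    # placeholder profile
            "ConnectionProfileDest": "LOCAL_FS",      # placeholder profile
            "FileTransfers": [{
                "Src": "/outbound/daily_feed_*.csv",  # pattern to collect
                "Dest": "/data/inbound/",
                "TransferOption": "SrcToDest",
            }],
        },
    }
}

print(json.dumps(transfer, indent=2))
```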
What needs improvement?
We have been experimenting with centralized connection profiles. There are some bugs to be worked out, so we don't feel 100% comfortable using only centralized connection profiles. We do have a mix of Control-M agent versions out there, which leads to some complications because older agents do not support centralized connection profiles.
A lot of the areas of improvement revolve around the Automation API because that area is constantly evolving. It is constantly changing and being updated, and some bugs get introduced from one version to the next. The regression testing doesn't seem to catch some of the bugs that were fixed in prior versions, and those bugs are then reintroduced in later versions. One particular example: we were trying to use the Automation API to fetch a number of run-as users from the environment. The username had special characters and backslash characters because it was a Windows user ID. In the documentation, there is a documented workaround for that; however, it relied on two particular settings in the Tomcat web server. I later found out that these settings work out of the box in version 9.0.19, but those two options were not included in the config file for 9.0.20. So, it led to a bit of confusion and a lot of time spent diagnosing the issue, both with support and with the BMC community. Ultimately, we did resolve it, but that is time that really shouldn't have been spent. It had obviously been working in 9.0.19, and I don't know why that was missed in 9.0.20, but that's a primary example of an improvement that can happen.
We've also noticed that the Control-M agents themselves now run Java components. Over time, they tend to destabilize. It could be that garbage collection isn't happening, or something like that. We then find that the agent is consuming quite a large amount of memory on the servers themselves. After recycling the agents and releasing that memory, things go back to normal, but there are times when the agent becomes unresponsive. The jobs get submitted, and nothing executes, but we don't know about it until somebody says, "Hey, my job isn't running." When we look at it, it says Executing within the GUI, but there is no actual process running on the server. So, there is some disconnect there. There is no alerting function on the agent that says, "Hey, I'm not responding." It does not show up in the xAlerts or anything like that.
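Our interim workaround is a watchdog along these lines. It is only a sketch: run/jobs/status is the documented status service, but the startTime format and the two-hour threshold are assumptions to adapt, and the token would come from session/login as in the earlier sketch:

```python
"""Watchdog sketch for unresponsive agents: flag jobs stuck in Executing.

Assumptions: the run/jobs/status service and its "statuses" list are
documented, but the startTime format and threshold here are placeholders.
"""
import datetime as dt
import requests

ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder
STUCK_AFTER = dt.timedelta(hours=2)

def stuck_jobs(token: str) -> list:
    resp = requests.get(f"{ENDPOINT}/run/jobs/status",
                        params={"status": "Executing", "limit": 1000},
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    now = dt.datetime.now()
    stuck = []
    for job in resp.json().get("statuses", []):
        # startTime format is an assumption; adjust to what your version returns.
        started = dt.datetime.strptime(job["startTime"], "%Y%m%d%H%M%S")
        if now - started > STUCK_AFTER:
            stuck.append(job["name"])   # feed these into your own alerting
    return stuck
```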
The integrated guides have not been that helpful to us. I do find a lot of the how-to videos on the knowledge portal to be useful. There are some videos where the directions don't quite match the actual implementation, and there are some typos here and there, but overall, the videos have been more helpful than the guides.
Its pricing and licensing could be a little bit better. The regular Managed File Transfer piece is a little overpriced, especially for folks who have already licensed Advanced File Transfer.
What I'm also noticing when I'm trying to recruit for Control-M positions is that the talent pool is quite small. Not a whole lot of companies utilize Control-M, and if they do, most don't want to let their Control-M resources go if they're good. There is a high barrier to entry for most people to learn Control-M. There are Workbench, the Automation API, and so forth, mainly for developers to learn, but there are not a whole lot of resources out there for people to get familiar with administering Control-M, in terms of either the technology or even awareness. So, it becomes very challenging to acquire new resources. A lot of the newer people coming out of college don't even know what Control-M is. If they do, they think of it as a batch scheduler, which certainly doesn't do justice to what it has become.
Control-M is a very powerful enterprise tool, but the overall perception has not changed in the past five to six years that I've been working with Control-M. There's not much incentive for people to dive into that world. It is a very small community, and overall, the value of Control-M is not being showcased adequately, maybe at the C-level for corporations. I've had multiple conversations with people at other companies who have already stopped using Control-M. About 70% of the companies out there do not take full advantage of the capabilities in Control-M. That type of underutilization really hampers and hinders the reputation of Control-M, because people then come away with the mistaken idea that Control-M can only do X, Y, and Z, rather than the fact that Control-M can do so much more. I don't know if it needs a grassroots marketing movement or a top-down marketing movement, but this is the perception, because that's what I'm hearing and that's what I'm seeing.

For some of the challenges that I face working in Control-M, when I go back to my management and say, "Hey, I want to spend more money in this space," they're like, "Why? Can you justify it? This is what we see Control-M as. It's not going to bring us value in this area or that area." I have to go back and develop a new business case to say, "Hey, we need to upgrade to MFT Enterprise," or something like that. So, it definitely requires a lot more work convincing management in order to get all these components. In the past, we had to justify acquiring Workload Change Manager. We had to justify acquiring Workload Archiving. All of these bring benefits not only to our audit environment but also to the development environment, yet the fact that we had to fight so hard to acquire them is challenging.
For how long have I used the solution?
I've been using Control-M for about eight years.
What do I think about the stability of the solution?
Version 9 was very stable. Once they started adding a lot of the newer Java components, the stability suffered. It seems to have gotten better in version 9.0.20, but that could just be my perception.
We run a lot of database client jobs. There are some things we've implemented that I understand can contribute to agent instability. We sometimes extract a lot of database output and massage that output using other scripts. I've noticed there are certain things you cannot do, or that contribute to the instability. For example, the output-scanning functionality certainly has a size limit; you probably don't want to scan anything too large, because that puts a lot of load on the environment.
In addition, there are times when the agent becomes unresponsive: the jobs get submitted, but nothing executes, and there is no alerting function. Those are the examples of instability I've noticed. Overall, the main application itself, the EM, and the scheduler have been pretty stable.
What do I think about the scalability of the solution?
It is very scalable in terms of job execution. I haven't really explored scaling Control-M and the EM environment to a point where we have hundreds of users accessing it at a given time. That's because I don't have a hundred users who want to access that at a given time, but I do understand that you can distribute the web server more, and then have a load balancer to balance the load. I would think Control-M is a fairly scalable application.
In terms of its users, we have a lot of application support folks. We do have some developers who access Control-M, mostly in the non-prod environments, to execute and monitor their own jobs. There are some software engineers and operational engineers who are part of the application support teams that access Control-M. As for size, we have about 50 concurrent users at most.
How are customer service and support?
I would probably give them a nine out of 10. For the most part, they're very helpful, but there's always an initial standard dialogue. For an issue, you have to collect EM logs, agent logs, and so forth, and submit them. Sometimes, we have done all the advance work and submitted everything, but they still come back and say, "Hey, we need the logs." It seems like a canned response sent without looking at the ticket.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We've been with Control-M for quite a long time. We have not used anything else in my history with this organization.
I have not looked at anything recently. I am aware there are other application orchestration solutions out there, but I have not felt the need to explore those options at this time.
How was the initial setup?
If you're deploying using out-of-the-box options, the process is fairly straightforward. If there is some customization that needs to happen, then the process can be complex, and the documentation does not cover some of those complexities.
For the most part, we are standard out of the box. We have run into some performance issues where we later had to go in and make some modifications. For example, we had to stand up different gateways for various purposes because one single gateway was not enough to take the load, in particular because we had installed Workload Archiving, and that was taking up a lot of resources. Other human users were not able to perform their actions because the archive user was consuming so much of the server's resources. So, there was a lot of tweaking there, and we basically had to break out and distribute some of the components.
In terms of an implementation strategy or deployment plan for Control-M, the environment has always had Control-M, and we just had to upgrade it. We've had Control-M in our environment for quite a long time, probably since it was still version 6. As we progressed through different versions, we obviously had to expand the environment and the platforms. We initially started off with Control-M on AIX, and we later moved to Control-M on Linux. As you go to Linux, there is planning for high availability in production environments, disaster recovery environments, and so forth. You have to plan for tying together a lot of the BMC Control-M components and identify where a load balancer or a DNS alias is required so that you can quickly flip over in the event something happens. Then, of course, there is sizing the environment in terms of how many jobs are running, how many executions are happening, and so forth. This is how we plan.
What about the implementation team?
We've used the AMIGO program, and then we've performed the upgrades ourselves.
For its day-to-day administration, we have a team of five people. They're administrators and schedulers.
What was our ROI?
Its return on investment is quite high, and that's mostly because we use so many of Control-M's capabilities. We also extend those capabilities: we write our own scripts to integrate Control-M with other applications such as Automation Anywhere and Alteryx. We have also done the reverse and helped other teams develop their capabilities in integrating with Control-M's REST API. So, the ROI is quite high for our use case, but based on conversations with some of the community partners out there, their ROI is probably quite low because they're not making use of all these new features. I don't know if it is because they don't have the skill set to make use of these new features, or their management structure or process structure is hampering them. A lot of large companies I know like to maintain the status quo, and that's why they're slow to adapt and slow to move. That is going to hurt them in the long run, and in the meantime, it can hurt the adoption of Control-M as well.
What's my experience with pricing, setup cost, and licensing?
Its pricing and licensing could be a little bit better. Based on my experience and discussions with other existing customers, everybody feels that the regular Managed File Transfer piece, not the enterprise one, is a little overpriced, especially for folks who have already licensed Advanced File Transfer. We understand that Advanced File Transfer is going away and will reach end of life, and there is some additional functionality built into MFT, but that additional functionality does not really correlate with the huge price increase over what we're paying for AFT already. This has actually driven a lot of people to look for alternative solutions.
I know they are now moving more toward endpoint licensing or task-based licensing. In my eyes, the value of Control-M is the ability to break down monolithic scripts. You don't want to have to wrap everything up in one monolithic script and say, "Hey, I'm executing one task because I want to save money." That defeats the purpose, and it defeats the value, of Control-M. By taking that monolithic script and breaking it down into its 10 most basic components, you can monitor each step. It is self-documenting, because within Control-M you can see how the flow works, and you can recover from any one of those 10 steps rather than having to rerun the whole monolithic script should something fail (see the sketch below). That being said, endpoint licensing does make more sense, but the pricing could be more forgiving.
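To make the decomposition concrete, here is a minimal Jobs-as-Code sketch. The folder, host, and RunAs names are invented, while the Folder/Job:Command/Flow structure follows the documented JSON format:

```python
"""A monolithic nightly script broken into discrete Control-M steps.

Jobs-as-Code sketch: folder, host, and RunAs names are invented; the
Folder / Job:Command / Flow structure follows the documented JSON format.
"""
import json

nightly = {
    "NIGHTLY_BILLING": {
        "Type": "Folder",
        "ControlmServer": "ctm-prod",            # placeholder server name
        "Extract":   {"Type": "Job:Command", "Command": "/opt/billing/extract.sh",
                      "RunAs": "billing", "Host": "app01"},
        "Transform": {"Type": "Job:Command", "Command": "/opt/billing/transform.sh",
                      "RunAs": "billing", "Host": "app01"},
        "Load":      {"Type": "Job:Command", "Command": "/opt/billing/load.sh",
                      "RunAs": "billing", "Host": "db01"},
        # The Flow chains the steps; a failure stops the chain at that step,
        # so you rerun from the point of failure instead of the whole script.
        "BillingFlow": {"Type": "Flow",
                        "Sequence": ["Extract", "Transform", "Load"]},
    }
}

with open("nightly_billing.json", "w") as fh:
    json.dump(nightly, fh, indent=2)
```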
What other advice do I have?
It is worth the time and money investment to learn more about Control-M. You should learn all the features of Control-M and really explore and test out its capabilities. That's the only way people get comfortable with what they can implement in Control-M. A lot of people aren't aware of just how flexible a platform Control-M is, especially with all the new features being added via the Automation API. These features are helping to drive Control-M, and the things developed in it, more toward a microservices model.
We are just beginning to explore using Control-M as part of our DevOps automation toolchains and leverage its “as-code” interfaces for developers. Obviously, there is a little bit of a learning curve for developers as well in order to see the value of developing Jobs-as-Code. Currently, we're walking developers through it, and we're holding their hands a little bit in terms of developing Jobs-as-Code, but we are heading in that direction because it does provide artifacts that you can version control and change quickly and easily. You can redeploy much quicker than just having the jobs defined in the graphical user interface. Previously, when you had to modify it, you either did it via the GUI, or you exported it via XML and then modified those components. Once you get the developers closer to their job flows, then you can theoretically speed up the delivery of applications along with scheduled jobs.
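The direction we're heading looks roughly like this sketch: a CI step that pushes version-controlled definitions straight to Control-M. The host and credentials are placeholders; session/login and deploy are documented Automation API services, but verify the details against your version:

```python
"""Deploy version-controlled job definitions through the Automation API.

Sketch: the host and credentials are placeholders; session/login and
deploy are documented Automation API services, but verify against your version.
"""
import requests

ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder

def login() -> str:
    resp = requests.post(f"{ENDPOINT}/session/login",
                         json={"username": "ci_user", "password": "********"})
    resp.raise_for_status()
    return resp.json()["token"]

def deploy(token: str, path: str) -> None:
    # The deploy service takes the definitions file as a multipart upload.
    with open(path, "rb") as fh:
        resp = requests.post(f"{ENDPOINT}/deploy",
                             headers={"Authorization": f"Bearer {token}"},
                             files={"definitionsFile": fh})
    resp.raise_for_status()
    print(resp.json())   # per-job deployment results

if __name__ == "__main__":
    deploy(login(), "nightly_billing.json")   # the file from the earlier sketch
```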
I don't have a whole lot of experience with other scheduling and orchestration environments, but from everything I've heard while speaking with colleagues, I would say Control-M ranks fairly high. I would rate it a nine out of 10. Control-M is usually the platform that people are moving to, not away from.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.