Staff IT Support Data Center Infrastructure at PT. Bank Woori Saudara Indonesia 1906, Tbk.
Real-time monitoring supports efficient job scheduling and error classification
What is our primary use case?
In my company, we use Control-M as the main scheduling and automation tool for ETL processes. It orchestrates data flows from AIX servers to Linux and Windows platforms, integrates with Informatica PowerCenter for data transformation, and also manages dependencies with several network-based applications.
How has it helped my organization?
Control-M has significantly improved our organization by providing centralized scheduling and monitoring of ETL workflows. It allows us to automate complex job dependencies across different platforms, including AIX, Linux, and Windows. With Control-M, we can integrate seamlessly with Informatica PowerCenter for data transformation, ensuring that data pipelines run consistently and on time.
What is most valuable?
The first aspect is real-time monitoring. Control-M provides good visibility across thousands of jobs, which normally run at their scheduled times. Control-M scheduling has always executed according to the defined schedule, except when incidents occur, such as storage failures.
What needs improvement?
I think Control-M has room for improvement because it should refresh more frequently.
Buyer's Guide
Control-M
October 2025
Learn what your peers think about Control-M. Get advice and tips from experienced pros sharing their opinions. Updated: October 2025.
872,655 professionals have used our research since 2012.
For how long have I used the solution?
I have been working in my current position for four months as a monitoring operator.
What do I think about the stability of the solution?
I would rate the stability of Control-M a 9.5 out of 10.
What do I think about the scalability of the solution?
I would rate the scalability of Control-M at 9.5 out of 10.
How are customer service and support?
I would rate technical support for Control-M a 9.5 out of 10 because it provides separate-level error classification, which is a very important feature that helps in determining the severity of issues.
How would you rate customer service and support?
Positive
What other advice do I have?
Many engineers in our organization use Control-M, including both vendors and internal employees, approximately 100 in total.
I would rate Control-M overall a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Last updated: Sep 9, 2025
Higher Executive Officer ICT at Irish Government
Provides batch management and reduced the need for manual intervention
Pros and Cons
- "It's very easy to use. Compared to other software, Control-M has significantly simplified our monthly release process, making it easier to move things forward."
- "There are numerous boxes to tick and things to check to ensure everything is in order before the upgrade happens. The process is very long."
What is our primary use case?
We use Control-M for batch automation. Previously, all of our batch work was manual, but now Control-M has significantly reduced the need for manual intervention. As a result, our batch processes are now 99% automated.
How has it helped my organization?
It's so easy to navigate, and especially for new hires, it's very straightforward to show them around the client because it is user-friendly. It's very easy to use. Compared to other software, Control-M has significantly simplified our monthly release process, making it easier to move things forward.
What needs improvement?
We're upgrading Control-M, and the process is very long. There are numerous boxes to tick and things to check to ensure everything is in order before the upgrade happens. We run three instances of Control-M, and making various changes for each is challenging.
For how long have I used the solution?
I have been using Control-M for five years.
What do I think about the stability of the solution?
You might experience a brief connection issue, but it usually resolves within a few minutes. The problem is related to the web server.
What do I think about the scalability of the solution?
Scalability is excellent. We utilize only about 20% of Control-M's capabilities.
How are customer service and support?
Support is helpful, and the online community is very good. There's the community forum, which I use regularly to find answers to questions. BMC has been very helpful in that space. They were extremely fast and solved a difficult problem our in-house team couldn't solve in a matter of minutes.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup is straightforward. We used to use in-house software.
We have three different environments where people can work. People can use our development instance of Control-M to work on their batch processes before they go live, allowing them to experiment and refine until they get it right.
What other advice do I have?
It's much simpler now. Everything was a manual batch job. Using the features of Control-M every day makes our batch processing so much easier.
It makes our lives so much easier. For our operations team, which runs our daily batch overnight, viewing everything as it happens has been an absolute lifesaver, especially if things go wrong overnight. It's great to have that visibility. It has also sped up our process, reducing overhead and weekend overtime. Batch processing is much quicker now, resulting in fewer manual errors.
Control-M has so much functionality that even if you initially purchase it to handle a specific part of your batch work, it can offer much more. We've progressed beyond traditional batch processing to include MFT, which has been incredibly useful. Our file watchers and other automation features have significantly simplified our workflows and made our lives much easier.
Overall, I rate the solution a ten out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sr. Automation Engineer at a computer software company with 1,001-5,000 employees
Saves time, offers great auditing capabilities, and has good automation
Pros and Cons
- "It has certainly helped speed things up."
- "They can improve their interface."
What is our primary use case?
I've been with the same company for 22 years. The use case started out truly as a batch processing solution. That was what we originally got it for back in the day to help us automate what was being done manually or being done through homegrown tools or scripts, et cetera. The use cases evolved through the years. Now, we use it to orchestrate workflows that are touching traditional data centers and that are going out to the cloud and bringing it back.
From one spot, we have a single pane of glass. Like many companies, our systems are getting more complex and more diverse, with cloud and edge computing, containerization, et cetera. However, we have one place where we can go and look and see what's going on. If something happens, we can check what happened and where it happened. Today, we're dependent upon a lot of services and cloud technology that sometimes we don't know the ins and outs of.
A big challenge is to make sure that we have certain things run daily or on a periodic basis. That really was the driving use case. We had a lot of manual tasks going on and if someone, for example, left on vacation, something may not get done for two or three days, a week or two weeks. This solution takes all that away.
The main use case was to get away from having to stare at a system or a screen, and just let things run, let the workflows flow, and only be notified if there's something wrong. That was really a big driving use case.
How has it helped my organization?
It freed up people to work on exciting work instead of mundane work. No one has to sit around and stare at that screen all day long. No one has to reinvent the wheel for the 50th or 500th time to do tasks like maybe put a file out into an S3 bucket or out into an HDFS Hadoop file store since it's already there. It's already done for them. They just drag, drop, click and they're done. It's freed people up and they can do the exciting work that is really what we should be doing anyway. No one wants to be doing boring work.
What is most valuable?
I am a big proponent of an automation API and Jobs-as-Code. That is Control-M in the DevOps world. It opens up the tool to a traditional operations tool. Developers can jump right in there now, giving them that ownership, and integrating the existing DevOps tools that they have. That is a huge feature that I just love.
There's an application integrator. It doesn't matter if you're trying to integrate with on-premises, off-premises, API, container, or serverless functions, it's easy for you to design. You just design that integration and then it's available instantly, and that's a huge time saver.
It's rather easy to create, integrate and automate data pipelines with Control-M. I can give a broad answer. It can be as easy as drag and drop, or it can be as complex as designing the integrations. If you use customization, you can access a data lake that your organization developed. For the typical user out there, the difference is on a scale of one to five, with one being easy and five being hard, you're probably looking at a two and a half. For most people, it's very easy. It's getting easier as it's all web-based nowadays. Alternatively, it can be all code-based.
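To make the code-based route concrete, here is a minimal sketch of what a Jobs-as-Code style definition can look like, built as plain JSON from Python. This is an illustration only: the folder, server, job, and user names (DemoFolder, ctm-server, app_user, and the script names) are hypothetical placeholders, and the actual deployment step via the Automation API depends on your environment and credentials, so it is only noted in a comment.

```python
import json

# Hypothetical Jobs-as-Code definition: a folder containing two command
# jobs and a flow that chains them. All names below are placeholders.
workflow = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctm-server",
        "ExtractJob": {
            "Type": "Job:Command",
            "Command": "extract.sh",
            "RunAs": "app_user",
        },
        "LoadJob": {
            "Type": "Job:Command",
            "Command": "load.sh",
            "RunAs": "app_user",
        },
        # Run LoadJob only after ExtractJob completes.
        "DemoFlow": {"Type": "Flow", "Sequence": ["ExtractJob", "LoadJob"]},
    }
}

definition = json.dumps(workflow, indent=2)
print(definition)
# In a real setup this JSON would be handed to the Automation API,
# for example through the `ctm` CLI or its REST equivalent (not shown).
```

Because the whole workflow is ordinary JSON, it can live in version control next to application code, which is what makes the DevOps integration the reviewer describes possible.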
I have not explored Python Client too much. I've tinkered with it and that's been the limit of my exploration. Now, with the integrations like AWS, we've made extensive use of it, and it is very easy for anybody to do. Python Client has a lot of great possibilities, especially in the data science arena, however, sadly we have not had an opportunity as of yet to play with it.
The Control-M interface for creating, monitoring, and ensuring delivery of files as parts of your data pipeline has gotten better. It is not perfect. That said, it’s come a long way over the years. Nowadays, most of it is web-driven. A lot of it can be API driven if you so wish. There's still probably some future work to be done there, however, for the average user that's coming in, starting to use it for the first time, they're going to need a little training and handholding at the beginning for maybe the first week or so. Then you can start setting them free to go out and use it on their own.
The orchestration of our data pipelines and workflows has been able to give a single point of view too. Management doesn’t care about the bits and pieces. A workflow or a data pipeline could have 100 or 1,000 components behind it, and management does not care about that. Management cares whether the SLA has been met or not. They want that easy-to-see red light or green light. We can provide them with that. The solution drives self-service and it helps. A manager doesn't have to call somebody in IT and wait around for an answer.
They can immediately get that information for themselves, consume it, and understand, "Hey, you know what, this data pipeline over here, we're going to be 15 minutes off our SLA for today." Then, they can start asking why. What I like about parts of Control-M, such as Batch SLA Impact, is that they can start doing some of that analysis themselves, for example, "this is late because the system was down for maintenance for two hours last night." That's really beneficial in today's business world.
The automation of Control-M has sped up everything. We can integrate directly into existing pipelines and the DevOps teams can get anything integrated with their Jenkins deployments. They don't have to wait for traditional operation functions. This is all built-in. It validates and checks. In some cases, it automatically deploys the agents and deploys the configurations. That's something that years ago you'd have to wait for. The speed of delivery has vastly improved.
Nowadays, auditing is as simple as running a report. If this falls under an auditable category, we can just hit a button and the report is done. Control-M audits everything, even if it is not under the regulatory or audit spotlight. Every process, every movement, and every change is logged by the system. If there's ever a question, you’ll be able to find a why and a when. There’s an audit trail.
It certainly helped speed processes up. I can eliminate what I call the manual gaps between certain features. I don't have to send an email to somebody to say, "Hey, guess what? That file's ready. Now you can run process X, Y, Z." The system just says "Hey, the file is there, let's go." It's eliminated those gaps between parts of the workflow. It also helped optimize the infrastructure needed as it's like a Tetris Puzzle. I have these ten different workflows that I'm trying to run and before I may have had ten dedicated systems for them. Now I know that I don't need that.
We use this model all the time. We can run those ten processes on three systems and be just fine. That saves money. The solution is not only speedy, but it also saves money.
They are doing a great job with continuing to drive the open-source model of it. Five years ago, if you looked for Control-M anywhere, you would not have found it. Today, that model has changed. They're actively publishing on GitHub.
You can download for free an entire container and run Control-M at home if you want to tinker with it. That was unheard of a few years ago. You can type a query in Google and start to see all sorts of documentation that is now available to the public. The major strides that they have made there are pretty darn good.
What needs improvement?
If you want to take it and ramp it up to doing some very heavy-duty integrations, you can find yourself at first dealing with a difficult integration. However, once you get that integration going for maybe a month or so, the next person after you will have less difficulty. That's the power.
They can improve their interface. They're going through huge modernization efforts and they're getting there. They're probably 75% there, however, there's still another 25% to go.
For how long have I used the solution?
I've been using the solution for 22 years.
What do I think about the stability of the solution?
Since it supports business, it has to be stable. It's very stable. We have not had major outages or anything. That's always a good thing, however, like with any solution, its stability is going to depend on how you deploy it and what safeguards you put in place, including high availability and disaster recovery, et cetera. All the hooks for that are in the product, however, it's up to you to decide how you're going to use those hooks.
What do I think about the scalability of the solution?
It's highly scalable. You can run five things in it today and easily scale up to run 1,005 things tomorrow. In terms of scalability, there are no issues there.
How are customer service and support?
Technical support tends to be very helpful.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
I used to work for an insurance company and I used Computer Associates. It was called CA-7 and CA-11, which are similar tools.
We tried to use Computer Associates before this, but it didn't support the systems we needed and the integration was next to impossible.
How was the initial setup?
I was involved in the deployment and initial setup of the solution right from the beginning.
We had jobs and workflows running within the first day. That was pretty good. We don't use the Helix model, however, there is a Helix model you can purchase, in which everything's hosted by BMC. You can be up and running literally in hours which is reasonable. There's a learning curve, however, if you do not get some value out of it within two days, you're probably doing something wrong.
At the time, there were only two of us deploying the solution. Today there are only three of us.
It's business-wide. Everything from data to marketing, to finance, even though it probably wouldn't make sense to anybody else, it touches everything. It's deployed across Windows, Linux, containers, VM, cloud, et cetera.
If anybody has a use case or wants to learn more about it, we'll show them. Anybody in our organization can get basic access and can tinker around in an alpha test environment. This includes non-technical people. We have non-IT people that use it.
If they can self-service and maybe design some parts themselves, that's a huge win right there. We have a very open model of deployment.
Occasional patches and vulnerability fixes come out. Most of the patching nowadays can be automated if you're using the Helix-based solution. A lot of that is handled by BMC.
What about the implementation team?
We did not use an integrator, reseller, or consultant for the deployment.
What's my experience with pricing, setup cost, and licensing?
I can't speak to the exact licensing costs.
Which other solutions did I evaluate?
Every few years we go through a reevaluation. We'll go through and look at what's on the market and what companies have come up with or released new versions. We'll go through and we'll say, "Okay, let's compare these, what do we need and what are all the tools offered out there?" We do that roughly every five years and it keeps us on our toes.
The biggest difference as of late is the API and Jobs-as-Code. Control-M is light years ahead of the competitors and what they're offering. Other competitors are starting to get APIs, however, only BMC is working with Jobs-as-Code and is in the lead. To my knowledge, they're really one of the only ones who can define your entire workflow as code.
What other advice do I have?
Control-M is pretty critical to our business as it runs many different business processes every day, and if it wasn't there, we would probably hire many more people, be a lot slower, and be more prone to error.
We use a hybrid deployment. We have parts in the traditional data center. We have parts in the cloud. We sometimes have parts that live on containers. They only exist for two minutes. It is very much a hybrid mix of goodies with our deployment.
I'd advise potential new users to examine it today and not think about what it did ten years ago. Control-M is an old product. It has been around since we all used mainframes, however, just because something's been around for a long time, doesn't mean it's a piece of junk or doesn't work with modern technologies. It has adapted and grown with the times. Control-M did cloud-based work before many of us were even talking about the cloud. It's hard to get rid of negative perceptions sometimes, however, the best thing for people to do is to head out to the internet, look it up, and go out to GitHub.
If you have a technical team, send them out to GitHub. You can download everything in an image or in a container and try it yourself. It doesn't cost you a nickel.
I'd rate the solution nine out of ten.
The biggest advice I can give is to try it out. Don't only believe what the PowerPoints tell you. There's no excuse for not having a deployment running within hours. Be willing to think about how it can solve problems in new ways. Sometimes we look for a new tool because we have a square problem, and we get upset that all the tools we're looking at only offer round solutions. Sometimes a tool only has round solutions because that's the proper way to solve the problem. You have to be willing to break down whatever you're trying to do, whatever workflow you're trying to automate or integrate, and take it in pieces.
If all you want to do is save yourself a lot of money, use Cron, and use Windows Task Scheduler. However, if you want to take your business to the next level and start to get to the point where you can automate to remediate and audit, that's where tools like Control-M come into play.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Operations Engineer at West Bend Mutual Insurance Company
Saves us thousands of hours, is widely applicable, user-friendly, and features top-notch reporting
Pros and Cons
- "The reporting is top-notch. I haven't found any other applications on the market that can replicate what Control-M offers. The alerting is very good, and I think their service monitoring is the best in the industry."
- "The stability could be improved. I ran into an issue with a recent Control-M patch. The environment would become unstable if security ports were scanned. This is an area they need to improve on, but ultimately it's a relatively small improvement."
What is our primary use case?
We use the solution in Western Mutual Insurance Group's environment for the daily scheduling of around 11,000 jobs. Our number of end-users is in the hundreds, across 18 to 20 teams. We have three different physical locations as a company. Since COVID, we are a partially remote workforce as well, so we have multiple locations.
It's essential that the solution orchestrates our workflows. Regarding processes like file transfers and data workflows, we want one source for that. We want one area where we can check and see how things are progressing, and Control-M is invaluable. Everyone has access in our environment to Control-M, and we all use it heavily. We utilize a ton of plugins in our environment. We started the transition into servers and are seeing what our license allows in that area. We try to take advantage of everything we can.
We use Control-M to replace a lot of our manual logging of job data. It's been very valuable in terms of logs that can output alerts.
I just did an audit earlier this year, and it was a swift process using the product. It took me less than a few hours, and without the solution, it would potentially take a couple of days to a week.
We essentially have a nightly batch cycle. We process data overnight, so it's available for end-users during the day. With manual execution instead of Control-M, this nightly batch cycle would stretch into a weekly or monthly batch cycle.
How has it helped my organization?
I took over as admin of Control-M about a year ago. Since then, the question has been how we can further utilize Control-M in our environment. We haven't yet found the limits of what Control-M can do. We're finding better ways to apply it every day. Moving from the old days of manually scheduling jobs to the current paradigm of using an automation tool has made the process much more manageable.
We define Control-M internally as a "critical business application." I would say that if Control-M were not available, the impact would be catastrophic to our business.
What is most valuable?
The reporting is top-notch. I haven't found any other applications on the market that can replicate what Control-M offers. The alerting is very good, and I think their service monitoring is the best in the industry.
The solution is a key part of our system and I have not seen any significant limitations with it. It's very reliable and performs as advertised.
We're just starting our data pipeline journey. Compared to other products in the market, I believe Control-M's is the easiest to use. Theirs came out ahead in terms of ease of use every time. I rate them very highly in that area. We're primarily an Azure corporation. We found that the solution's built-in integrations with Azure are straightforward to use.
We actively build out methods of alerting, for instance, when workflows in Control-M don't complete, as this impacts our end-users and our managers that support the teams attempting to provide data for the end-users. I think Control-M has a ton of built-in integrations that make alerting when that data is unavailable more visible to end-users. I think that's been very useful in our environment.
What needs improvement?
The stability could be improved. I ran into an issue with a recent Control-M patch. The environment would become unstable if security ports were scanned. This is an area they need to improve on, but ultimately it's a relatively small improvement.
For how long have I used the solution?
We have been using the solution for around seven years.
What do I think about the stability of the solution?
One patch had some issues, but the fix pack was very helpful. Other than that, we haven't had any stability issues with this product. So I'd rate it very highly.
What do I think about the scalability of the solution?
The scalability is excellent, we're looking into options in Azure for scaling up and down in our environment, and Control-M has been essential in accommodating that.
How are customer service and support?
Technical support would be a 10. They're always available. They've been very helpful with any questions I have. There are multiple means of contacting them, and they've always been responsive. The technical account partner, Jake, has been very helpful. The account rep, Chris, has also been very responsive.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Control-M in our environment predates my time. I believe the company first implemented the solution around 15 years ago.
How was the initial setup?
The initial setup was before my time. We started off as a mainframe-exclusive instance of Control-M, and then we transitioned to distributed servers from there. I am a team of one.
What was our ROI?
The solution's automation has improved our business service delivery speed. Our big push this year has been toil reduction and automation of manual tasks that ultimately take time away from our engineers. Control-M is factored into probably north of 80% of those reductions with its ability to automate tasks. So far this year, we're at about 4,000 hours of toil reduced. I would say Control-M has played a factor in 3,000 of those hours.
What other advice do I have?
I would rate this solution a ten out of ten. Control-M is critical to our business.
There are other solutions like Control-M out on the market, but in every recent market evaluation, Control-M has always come out on top. I think they are becoming more cloud-native as they progress with their Control-M Web Services. They're more reliable than the others on the market right now.
I would advise anyone to start with a trial version of this product. I think they'll be very impressed with it.
We don't use Python to a significant degree at all in our environment. We have been looking into that, but nothing solid yet. We don't use AWS but are looking to get into it in 2024.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Automation Engineer at CARFAX
Integrates with many solutions, significantly improves our execution time, and has a good price-to-performance ratio
Pros and Cons
- "Our ability to integrate with many different solutions has been invaluable. The new approach of the automation API and jobs-as-code is also valuable."
- "The biggest improvement they could have is better QA testing before releases come out the door."
What is our primary use case?
We use it for our workload automation. We use it as a single pane of automation for our enterprise.
We currently use three different environments for three different production setups. We have production data tasks, and we have multiple different levels spread out.
We are currently using the most recent version. In terms of deployment models, they offer both: an on-prem solution and a SaaS solution. It just depends on what your company needs; they can take care of you.
How has it helped my organization?
Over the past so many years, I have learned that one of the most important features is giving everybody one tool that can do many different types of automation and workflows. That's been invaluable. Instead of having multiple tools for different teams and different platforms, Control-M has become the one-stop-shop for a lot of these automations.
It is very easy to create, integrate, and automate data pipelines with Control-M. It allows us to ingest and process data from different platforms. It could take us anywhere from a day to a week to get a new integration in place. We've taken it upon ourselves to try to introduce that to all of our internal customers as well.
It can orchestrate all our workflows, including file transfers, applications, data sources, data pipelines, and infrastructure with a rich library of plug-ins, which is very important for us. We try to utilize all new plugins that come out. If our company uses it, we try to use that plugin at least somewhere in our infrastructure.
In terms of creating, monitoring, and ensuring delivery of files as part of our data pipeline, it is a recent project, and it is something I've been learning about recently. However, having the ability to set up a job, set up a connection, deploy that job, and automatically have the feedback on where your files are when they've been moved has made life five times easier.
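As a rough sketch of the "set up a job, set up a connection, deploy" pattern described above, the following builds a managed-file-transfer style job definition, loosely modeled on Control-M's JSON job format. Every connection profile name, host, and path here is a hypothetical placeholder, not real configuration, and the deploy step itself is omitted because it depends on your environment.

```python
import json

# Sketch of a file-transfer job definition as plain JSON. The profile
# names (SFTP_SOURCE, SFTP_TARGET) and paths below are placeholders.
transfer_job = {
    "Type": "Job:FileTransfer",
    "ConnectionProfileSrc": "SFTP_SOURCE",
    "ConnectionProfileDest": "SFTP_TARGET",
    "FileTransfers": [
        {
            # Move one file from the source host to the target host.
            "Src": "/outbound/daily_report.csv",
            "Dest": "/inbound/daily_report.csv",
            "TransferOption": "SrcToDest",
        }
    ],
}

print(json.dumps(transfer_job, indent=2))
# Once deployed, the scheduler (rather than a human) watches for the
# file, runs the transfer, and reports success or failure back.
```

The point of the sketch is the shape of the workflow: the transfer, its endpoints, and its direction are declared up front, which is what gives the automatic feedback on where files are once they've been moved.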
It has had an effect on our organization when creating actionable data. It has decreased the time to resolve dramatically. Everywhere I've worked, having Control-M orchestrate those alerts has been invaluable.
Our internal customers and management really appreciate the ability to be proactive before things really devolve into a problem or a high-severity incident. We're trying to incorporate analytics and proactive notifications as much as possible to decrease our downtime dramatically.
It impacts our business service delivery speed. Within the past few years, we have taken projects that normally would have taken multiple months, but the duration came down to a couple of weeks. So, we've increased our productivity tenfold.
Its impact on the speed of our audit preparation process has been great. With some of the built-in tools and some of the built-in reporting, being able to pull that data at any given moment has aided audit and probably increased our personal response time tenfold. We're able to get reports and audit out to the requesters within a week, if not sooner. Without Control-M, it would typically take us at least a month or so to get that out.
It has dramatically improved our execution times. We're able to get solutions out the door much quicker. A lot of our automations have been built around that, and we're able to get valuable output relatively quickly. When developing a new solution, without Control-M, we would spin our wheels trying to come up with something that could only do a fraction of what Control-M can do at this point. Especially for a new solution or a new execution, we would be looking at a couple of weeks, if not a month or two, to come up with something deliverable. With Control-M, we're able to get that down to a week or two.
What is most valuable?
Our ability to integrate with many different solutions has been invaluable. The new approach of the automation API and jobs-as-code is also valuable.
What needs improvement?
The biggest improvement they could have is better QA testing before releases come out the door.
For how long have I used the solution?
I have been using this solution for about 10 years.
What do I think about the stability of the solution?
I love it. It is rock solid. It is very stable.
What do I think about the scalability of the solution?
There are no limits. You can easily scale up depending on your workload or whatever you need in a very short time. You can pretty much automate it at that point.
It is being used extensively in the organization. We do have multiple locations, but because we're using a web client, it is hard to say exactly how many end users there are. It is a company-wide solution, so we probably have a couple of hundred users at this point.
How are customer service and support?
They're very responsive. I'd rate them a 10 out of 10.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
I personally have always used Control-M as my primary. I do know that other companies have experimented in the past, but I've always come back to Control-M.
How was the initial setup?
I wasn't involved in the deployment. I always came on a little afterward.
In terms of maintenance, it is relatively maintenance-free besides the patches that come out. They come out pretty frequently, but when they do, they're comprehensive. Other than that, maintenance is minimal. Because it is low maintenance, our engineering team does the maintenance when required.
What was our ROI?
We have absolutely seen an ROI. Over the last five years, we've done price analyses against other tools, and we always come out on top with Control-M. It always has the best price-to-performance ratio.
It is critical to our business. I don't know the facts and figures, but from anecdotes and conversations with management and upper levels, I can say that it is considered a priceless solution in our environment.
If we no longer had Control-M, a lot of our most important pipelines would fall apart, and workflows would go unnoticed. The automation is so deeply integrated at this point that there's no telling what would break. There may be things that we're not even thinking of.
What's my experience with pricing, setup cost, and licensing?
For the tooling that you get, the licensing is acceptable. It has competitive pricing, especially with all the value that you get out of it.
There are additional costs with some of the additional modules, but they are all electives. Out of the box, you get the standard Control-M experience and the standard license. They're not forcing some of the modules on you. If you decide that you do need them, you can always purchase those separately.
What other advice do I have?
I would advise working with the engineers, reading the documentation, and going into it expecting to set up high availability.
Control-M has been around a while. They're very quick to market, and they're very quick to adapt. At this point, they do have offerings, either on the way or recently released, that can support multiple cloud environments.
We are currently not using the Python Client, but that is on our board, and I do intend to investigate it. We are utilizing some parts of the AWS integration.
I would rate it a 10 out of 10.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Maintenance Manager at a transportation company with 10,001+ employees
We have seen quicker file transfers with more visibility and stability
Pros and Cons
- "If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because it is a much broader base of the business than just batch scheduling. Typically, with SAP batch scheduling, it would work with dedicated teams. With file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple as possible and as managed as well as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs are growing. We need to hand back some of the responsibility to the customer for them to resolve and action it."
- "The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability. We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better."
What is our primary use case?
We schedule the majority of our SAP jobs with Control-M. We do that globally for all the production plants. We have tens of thousands of SAP jobs, plus managed file transfers.
SAP batch and managed file transfer are critical processes that we have automated. We are in the process of replacing Connect:Direct and SecureTransport, the legacy file transfer solution, with Managed File Transfer (MFT). That is on the global scale.
Control-M for Informatica is gaining a lot of popularity, primarily on the financial side of the business. They have a lot of security restrictions that make their jobs very difficult. Also, there are cost issues with Informatica, e.g., anytime they execute a workflow in Informatica, they get billed for it. We are adapting the solution to not run the workflow every half an hour or hour, because they pay for it, but only when it is needed. We do a database query and check if there are new records that need to be processed. Depending on that output, we either run the Informatica job or leave it and check again in maybe half an hour. We are optimizing and saving money for the customers and ourselves, while reducing the number of executions, jobs, etc.
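The pre-check described above can be sketched as a small job script: it queries the source database for unprocessed rows and uses its exit code to tell the scheduler whether the Informatica workflow should run this cycle. The table and column names are hypothetical, and an in-memory sqlite database stands in for the real source system.

```python
import sqlite3

def has_new_records(conn) -> bool:
    """Return True if there are staged rows not yet processed.
    Table and column names are hypothetical stand-ins."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM staging_records WHERE processed = 0"
    ).fetchone()
    return count > 0

# Demo with an in-memory database standing in for the real source system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_records (id INTEGER, processed INTEGER)")
conn.execute("INSERT INTO staging_records VALUES (1, 0)")

# In the scheduled pre-check job, the exit code drives the downstream
# condition: 0 means "run the Informatica workflow", 1 means "skip this
# cycle and check again later".
exit_code = 0 if has_new_records(conn) else 1
print(exit_code)
```

Because the check is a cheap query while the workflow execution is billed, every skipped cycle is a direct saving.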
We are using on-premises. We have been for many years. We are aware of the new Helix offering, which is a SaaS/cloud offering from BMC, but it is not really ready for enterprise yet, not at our scale. We are doing some cloud, though not the Helix offering. I have installations in the cloud using Azure and AWS. We are not fully functioning there yet. We are waiting for the demand, but we are aware of the cloud opportunities and making use of them.
We have been busy upgrading to version 9.0.20 Fix Pack 100 but our production environment is still on 9.0.19 Fix Pack 200.
How has it helped my organization?
We use Control-M as part of our DevOps automation toolchains and leverage its “as-code” interfaces for developers. We have found that a lot of the new customers who are developing for cloud prefer to use the API and would like to test for themselves. That is really where Jobs-as-Code comes in. They can test and fail quickly the agile way. We definitely have some customers who are using that.
We have seen quicker file transfers with more visibility and stability. Because data transfers are part of the Control-M tool, they form part of the normal workflow. We see the value in that.
If they have ad hoc requirements, then they can theoretically schedule their own file transfers with the Self Service. We are trying to push as much work back to the customers or developers that have that requirement, because they prefer to help themselves, if possible. We try empowering them and enabling them through Control-M, especially for file transfers, because it is a much broader base of the business than just batch scheduling. Typically, with SAP batch scheduling, it would work with dedicated teams. With file transfers, the entire business is involved. There are business users, end users, etc. It definitely needs to be as simple as possible and as managed as well as possible. They need to manage it themselves, if possible, because our team is not growing but the number of customers, applications, and jobs are growing. We need to hand back some of the responsibility to the customer for them to resolve and action it.
What is most valuable?
A new feature, which we deployed about two years ago, is the Managed File Transfer (MFT). We also use Managed File Transfer Enterprise (MFTE) for external transfers of our biggest use cases.
Another valuable feature would definitely be the MFT dashboard that is now available in Control-M natively. It is easy to just search for jobs, files, etc. Instead of the customers contacting us to find out what happened, when it happened, and why it happened, they are able to service themselves. This allows us to cut down on operational staff, costs, and time because customers can manage it themselves to a degree.
The most valuable feature is definitely the Self Service. A couple of years ago, it was available, but not with the features that it has today, and there wasn't really uptake on it. We have seen steady growth in the number of users using it and what they are using it to do. They are using Self Service to schedule by themselves and do monitoring by themselves. They interact with their schedules. Self Service has also become more user-friendly and accessible. That is one of the features that we use a lot lately.
The reporting has definitely improved over the years. We are definitely doing more of that as well. We are definitely seeing more value in reporting on the batch schedules, optimizing it and seeing if we can cut costs.
What needs improvement?
The reporting has improved. It is not where it should be yet, but we have seen improvements. The biggest thing for me is the restrictions regarding templates for reporting. You can't create your report with your own parameters. We have a meeting weekly with BMC and our customer lifecycle architect, and this comes up quite frequently. We have been privileged enough to do work with the developers. They are aware of the requirements regarding reporting and what our customers are asking for.
What I found lately about the YouTube videos, specifically, is that they are very simple. Usually, when I watch a video, I would also read the manual and instructions to see if I understand it. I would hope that the interactive sessions, Q&As, or videos could handle the more complex aspects of what they're discussing. An example would be LDAP authentication for the Enterprise Manager. They would typically just go through the steps that are in the documentation. What people watching those videos are typically looking for is how to do the more complex setup, doing it with SSL and distributed Active Directory domains. Things that are not documented. I find those videos helpful for somebody who is too lazy to read the manual, but I expect them to handle more than what is available in the documentation and the more complex situations.
The high availability that comes from BMC with its supplied Postgres database is very limited. Even using your customer-supplied Postgres database is problematic. We have engaged with them regarding this, but it is difficult. My company doesn't want to do this and BMC doesn't want to do that. We just need to find some middle ground to get the proper high availability.
We're also moving away, like the rest of the world, from the more expensive offerings, like Oracle. We are trying to use Postgres, which is free. The stability is good. It is just that the high availability configuration is not ideal. It could be better.
For how long have I used the solution?
I have been using Control-M for 12 years.
What do I think about the stability of the solution?
Control-M is really stable. We have seen that throughout the years. I have had customers who have been running version 6.3 for seven years after support stopped. It has been running for three years straight, without a reboot or restart, doing its job. We have actually had issues with customers who don't want to upgrade. They have said, "This stuff is working perfectly. Just leave it alone because it just doesn't go down."
We have a saying in our department as well. When somebody says there is a problem, we say, "It's not Control-M. Check everything else. Check the server, network, and database. It's not Control-M." 99 out of 100 times, we are right. It is either infrastructure or something else, but it is not Control-M.
What do I think about the scalability of the solution?
I have never run into any problems scaling, either vertically or horizontally, with Control-M. In each version, it just gets better. I am really happy with that.
We were one of probably the first companies who bought MFTE, and it was not ready yet. It didn't scale properly. It didn't offer the functionality that the competing tools that we were currently using had. It's grown tremendously because of our input and feedback directly to the developers and BMC. I'm not complaining about it, but it put us back a bit. We have learned not to be a very early adopter. We have seen the same with the cloud. Everybody wants to jump on the cloud, but nobody knows why. They just want to do Cloud. We've made a substantial investment with MFTE. It was a couple of hundred thousand euros, and it was not ready yet for our enterprise requirements.
Our monitoring team does 24/7 monitoring. They handle the alerts, check the job flows, and make sure escalations are going through. If tickets need to be logged, they make sure that gets done. They also handle ad hoc requests from customers.
There is the scheduling team who does the job definitions, updates, etc.
There is the administration team, which I'm part of, with administrators who look after the infrastructure, Enterprise Manager, servers, agents, gateways, etc. Recently, we also have a dedicated MFT team that only looks after MFT because of the huge number of customers, requests, and requirements.
Other customers who use it are really all across the board. We had a presentation last week to our bigger department, which is worldwide and which we are part of in South Africa. We have noticed about 52 main departments, plus the sub-departments between them. A lot of them sit right across the enterprise. Typically, the most active users would be SAP users who check for output on the jobs running in Control-M. It is just 10 times easier to do it in Control-M than in SAP itself. We also manage to keep the output for longer than SAP does. What they can't find in SAP after seven or 14 days, they can usually find with us, e.g., outputs for the jobs or logs.
There are the MFT users who love being able to see each morning that their file was transferred, how long it took, and how big the file was. A lot of self-service users are using the Self Service function. Team leads and operational staff use it most.
How are customer service and technical support?
I love support and the support people. It is very good. We are quite a mature customer, and the whole team has a lot of experience (sometimes more than the support people). If support doesn't realize the seriousness of a situation, we don't escalate formally but just make our customer lifecycle architect aware by saying, "We are not feeling this case is getting the required personnel on it. We need somebody more senior. We don't have time to cover the basics that first-line support is trying to deal with. We've been over that." Overall, I would rate the technical support as nine out of 10.
Which solution did I use previously and why did I switch?
Previously, we used a big SAP solution, which was not commercial and was designed specifically for our company.
We have also recently taken over a mainframe migration; the scheduling was on TWS, which is IBM's scheduling software on the mainframe (z/OS). We moved all of that over to Control-M. That was a combination of SAP jobs, Informatica jobs, database jobs, and normal script jobs. So, we use a bit of everything. We have also used the Automation API a lot for interfacing between Control-M and other middleware tools, but primarily it is SAP and file transfer.
We use Control-M to integrate file transfers within our application workflows. It integrates with the tools that we are replacing, i.e., Connect:Direct, which is quite a legacy tool, and our old IBM tool, which we have been using for more than 15 years and which has no visibility. With Control-M, you get visibility on your file transfers and how they interact with your batch schedule. Something gets created, it's sent over, and then it gets processed. Control-M has already been part of the execution, extraction, import, or processing. Now, with the file transfer, customers can see the entire workflow from the data being generated, transferred, and processed. This resolves a lot of complexity because you used to need to contact three different teams to find out if a file arrived and was processed. One tool does all of that now.
There isn't a lot of new functionality that our previous tools didn't have. It is just re-consolidating all the tools that we need into a single one. That makes it much simpler. There is one team to contact globally for file transfers, and that makes it easy. It provides visibility with its Self Service that wasn't available with Connect:Direct or SecureTransport. Our customers are quite happy to have that. We can also provide reports.
SecureTransport competes with MFTE. There isn't a conversion tool for that yet. Connect:Direct simply provides the means for a conversion tool, but it gets integrated into scripts and applications. It's very difficult to migrate or extract that data.
How was the initial setup?
The initial setup is straightforward. It changed a lot over the years as well, but in the nicest way. You have minimal downtime with the upgrades on Enterprise Manager as well as the Control-M servers. A lot of preparation is done before the tool is shut down for the upgrade. Our downtime used to be at least an hour for upgrades or migrations. That has typically come down to 10, 15, or 20 minutes, depending on the size of the server. It is definitely more stable and understandable.
We have also noticed that the exception handling is much better if there are issues. We don't get that many surprises. The errors are understandable. The agent upgrades have zero downtime, so that is just amazing. All the patching and maintenance is centralized. We have migrated our development and integration environments to 9.0.20 in the last month or two. That went very smoothly. We will start with production next week. We have been through this quite a number of times. We came from version 7 to version 9 to versions 9.0.19 and 9.0.20. We do all the upgrades in-house.
What about the implementation team?
We do it all ourselves. If we get stuck, we contact BMC. At my previous job, we were a partner for BMC in South Africa, and I was on the support side for BMC. It is only when we need to open tickets for bugs or problems that we contact BMC. We typically handle upgrades and migrations in-house.
There are three people full-time on the administrative side. We have a global setup: Europe, Mexico, America, Africa, and China. We have tons of virtual machines and hundreds and hundreds of agents, and even more that we might host.
What was our ROI?
I know we have already budgeted for more tasks. The company is very happy with the performance of our teams, specifically the South African team. We are really doing more with good tools and less people. There is definitely a return on investment, just from the stability and visibility which has improved a lot.
On the effort side, we have definitely seen a lot of savings. We have some bigger projects that are automating the schedule and removing human intervention. These have reduced department headcount by about 50% once we were able to automate the batch side of it, because our department also offers monitoring and operations as part of our service. We have a dedicated monitoring team. Whatever runs in Control-M is monitored by us and escalated, if needed.
Departments have multiple scheduling tools between the mainframe, distributed systems, and cloud. Control-M brings all of that onto a single pane of glass, so we can see the exact execution on the mainframe, on distributed systems, and in the cloud, instead of using three or four different tools. Therefore, the complexity of batch monitoring and scheduling has decreased with the standardization on Control-M. That is definitely one of the big advantages that we have seen.
What's my experience with pricing, setup cost, and licensing?
It is expensive. We have a lot of customers who complained initially about the costs, because unfortunately it's not just the licensing; it's the infrastructure, salaries, etc. I like the licensing model. It is pretty straightforward. We are on the task license. I know that we have some really good discounts. Our BMC account manager makes sure that we stay below the license count as well as checking for growth. Overall, it's good. The licensing is simple enough for me, though it is a bit expensive. Especially with the cloud coming in, we might see the licensing change in the future, but I'm guessing.
This is from my previous years supporting banks and big companies: if it's not enterprise scale, I find that it's too expensive. You really have to be quite big and have dedicated support staff to run it; then you'll be fine. At smaller companies, it's too expensive because they want to automate everything. Stuff that can literally run once a day for the rest of their lives ends up costing them $3 a job a day. It becomes too expensive, eventually. They don't see the return on investment because it's not business critical; nobody is going to die, and they are not going to lose money, if that job didn't run exactly at 11 minutes past 4:00. It's definitely for bigger enterprise companies, especially banks or healthcare providers. We have had an instance where Control-M was unavailable due to external factors for 20 minutes, and there was a loss of almost a million euros because the solution involved logistics.
Which other solutions did I evaluate?
We have done the usual crontab migration. Everything is in crontab or Windows Scheduler. Typically, we end up with a migration, even if it's from a known tool, where we end by exporting it into Excel and converting it into job definitions with a script. We have been involved in that, but nothing using BMC tools.
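The export-and-convert step described above can be sketched as a small script that parses crontab entries into job-definition dictionaries. The output shape here is illustrative only, not the exact Control-M schema; in practice the generated definitions would be adjusted to the target scheduler's format before import.

```python
import json

def crontab_to_jobs(crontab_text: str) -> dict:
    """Convert simple crontab entries into jobs-as-code-style definitions.
    The output keys are an illustrative shape, not the real schema."""
    jobs = {}
    for i, line in enumerate(crontab_text.splitlines()):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # Standard five cron fields, then the command (split at most 5 times).
        minute, hour, dom, month, dow, command = line.split(None, 5)
        jobs[f"MigratedJob_{i}"] = {
            "Type": "Job:Command",
            "Command": command,
            "Schedule": {"Minute": minute, "Hour": hour,
                         "DayOfMonth": dom, "Month": month, "DayOfWeek": dow},
        }
    return jobs

sample = "30 2 * * * /opt/etl/nightly_extract.sh\n# nightly load\n"
print(json.dumps(crontab_to_jobs(sample), indent=2))
```

A script like this handles the bulk of a crontab migration mechanically, leaving only special cases (environment setup, chained commands) for manual review.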
When I joined the company, I first supported them through the local partner. Because we had such a vast array of scheduling tools, they went through a PoC and business case. We evaluated three or four tools, of which BMC Control-M was one. Quite soon, because the company was already using Control-M in Africa and China, they were looking for a global solution to see if it really could create change.
What it came down to was ease of use, enterprise capability, and BMC was already in the company with ITSM and a couple of other products as well. They had a good relationship with us. We consulted with other customers who have used it as well as references because it was expensive. It was definitely the most expensive solution then, out of the four. However, we didn't want to go five years down the line and then have to change again because of issues.
What other advice do I have?
We have had a very good run with Control-M. I love it.
With the move to big data and especially with our AWS Cloud presence, we have a data lake. We are in discussions with the analytics teams about how they can utilize Control-M in the cloud for analytics, big data, etc. However, at the moment, it is not a big deal.
What we have found with Jobs-as-Code is that customers need to understand Control-M better: how the scheduling works, the knowledge around it, its conditions, etc. It took some time for the developers to get used to Control-M and then Jobs-as-Code, but they are now confident with it. We are presenting twice weekly. We have an open forum for interested parties about Control-M or our department, Enterprise Scheduling and File Transfer, where we have a dedicated session about Jobs-as-Code. There are questions about how other departments are doing it, whether there is a better way to do it, and whether they can save on the number of jobs. Can we make them rerun? Instead of creating 10 jobs, can it be done with five? So, there is not a lot going from Jobs-as-Code directly into production, but we have a couple of parties, especially on the cloud front, who are very interested in it.
The solution is enterprise scale. If you want to integrate all your applications into one view and get all the functionality across the board, such as file transfer, scheduling, cloud, and on-prem, you can also create your own application integrations, via APIs, for applications that are not currently supported by BMC. For a top-100 enterprise, there isn't a better tool on the market.
I would rate it as a nine out of 10.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
AVP - Systems Engineer at a financial services firm with 10,001+ employees
Allows us to integrate file transfers more readily, resolve issues quickly, and orchestrate a diverse landscape of vendor products
Pros and Cons
- "The File Transfer component is quite valuable. The integration with products such as Informatica and SAP are very valuable to us as well. Rather than having to build our own interface into those products, we can use the ones that come out of the box. The integration with databases is valuable as well. We use database jobs quite a bit."
- "A lot of the areas of improvement revolve around Automation API because that area is constantly evolving. It is constantly changing, and it is constantly being updated. There are some bugs that are introduced from one version to the next. So, the regression testing doesn't seem to capture some of the bugs that have been fixed in prior versions, and those bugs are then reintroduced in later versions."
What is our primary use case?
Control-M supports a lot of business processes. It supports some of the HR functions. I don't know if payroll is directly supported, but we do run jobs through PeopleSoft, which obviously impacts HR. Recently, we've started using the SAP module. So, we're making a transition from PeopleSoft to SAP, and I also see some payroll functions happening there.
How has it helped my organization?
We use Control-M to orchestrate a diverse landscape of vendor products such as Pega, MuleSoft, etc. File transfers and data feeds fetching are quite important for us. So, a lot of data processing happens through Control-M.
Control-M provides us with a unified view where we can easily define, orchestrate, and monitor all of our application workflows and data pipelines. Of course, such a diverse landscape requires you to make the effort to utilize Control-M to tie everything together or to act as the glue. Once you do that, everything is clearly defined, and you can view these disparate systems using one unified pane. If you don't define it correctly, then obviously Control-M won't have that insight, and so you'll have to go to multiple locations to go look at your job statuses.
We use its web interface. It is primarily for the application support teams to go monitor their own jobs. The jobs defined within Control-M are tightly controlled by a specific group of people. There are also people who need access to view that the jobs were completed successfully or why the jobs may have failed. These people are given access through Control-M web to view and monitor the jobs that they support or the applications they support. They're usually able to log on without having to install any client on their personal workstations. So, it's quite convenient. We have not implemented its mobile interface.
The integrated file transfers with our application workflows have certainly sped up our business service delivery by 80%. It has allowed the business to integrate file transfers more readily. Prior to utilizing the Control-M module, people had to write their own file transfer scripts in a scripting language of their choice, to varying degrees of effectiveness. With the integrated File Transfer solution within Control-M, there is a standardized way of performing file transfers, along with the capability of file watching and grabbing the names of the files that were transferred, making it much more versatile.
Control-M can immediately report when a job fails. If you have proper monitoring in place, you're notified immediately when your business flows are impacted. In the past, when you ran jobs using cron or just wrote shell scripts, you were really left in the dark, because they don't necessarily report errors, even from within Control-M. Implementing Control-M has made the business realize how critical it is to have proper error coding within the scripts that they schedule. If the scripts don't report errors or redirect the system output into log files, there is no way to detect that a job has failed.
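A minimal sketch of the "proper error coding" point: the wrapper below surfaces a child step's output and propagates its exit code instead of swallowing it, which is what lets any scheduler detect the failure and capture the sysout. The step being run is a trivial placeholder.

```python
import subprocess
import sys

def run_step(cmd: list) -> None:
    """Run one pipeline step and fail loudly so the scheduler sees it."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Write the child's output to this job's stdout/stderr, which the
    # scheduler captures as the job's output for troubleshooting.
    sys.stdout.write(result.stdout)
    sys.stderr.write(result.stderr)
    if result.returncode != 0:
        # Propagate the failure instead of swallowing it: a non-zero
        # exit code is what lets the scheduler mark the job as failed.
        sys.exit(result.returncode)

# Placeholder step: a trivial child process that succeeds.
run_step([sys.executable, "-c", "print('step 1 ok')"])
```

Scripts that always exit 0, or that discard their children's stderr, look "green" to the scheduler no matter what actually happened, which is exactly the failure mode described above.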
We've automated many time-consuming business reports and other things that were very manual and took a tremendous amount of manhours. We've also automated a lot of maintenance using Control-M. We've integrated with Ansible Tower. So, we now are able to run Ansible playbooks and Ansible job templates. With the scheduling capability and the multitude of integrations that Control-M offers, it really acts as the unifying glue and as a communicator and orchestrator across the enterprise. With Ansible Tower, you can run a number of playbooks through it to perform patching and reboots and whatever maintenance that the infrastructure teams require, but you can't really do it when the business is still operating, or you can't do it when that business is operating, but you could do it for another business that's not operating at the moment. It is very hard to coordinate that without knowing which lines of business have jobs running or things like that. With Control-M, you can see that and you can actually enact workload policies to put jobs on hold prior to running Ansible playbooks. Once your Ansible playbook is complete, you can release the jobs again by deactivating the workload policies. So, it makes those processes very streamlined.
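The hold-patch-release flow described above can be sketched as follows. The workload-policy calls are stubbed out as hypothetical helpers (the real calls would go through the Automation API or `ctm` CLI, whose exact endpoints are not shown here), and the sketch runs in dry-run mode by default so it only reports what it would do.

```python
import subprocess

def set_workload_policy(policy: str, state: str, dry_run: bool = True) -> str:
    """Hypothetical helper: activate or deactivate a workload policy.
    In dry-run mode it only reports the action it would take."""
    action = f"{state} workload policy '{policy}'"
    if dry_run:
        return f"[dry-run] {action}"
    # A real implementation would call the Automation API / ctm CLI here.
    raise NotImplementedError("wire this to the scheduler's API")

def run_maintenance(policy: str, playbook: str, dry_run: bool = True) -> list:
    """Hold jobs, run the Ansible playbook, then release the jobs."""
    steps = []
    steps.append(set_workload_policy(policy, "activate", dry_run))    # hold new jobs
    if dry_run:
        steps.append(f"[dry-run] ansible-playbook {playbook}")
    else:
        subprocess.run(["ansible-playbook", playbook], check=True)
    steps.append(set_workload_policy(policy, "deactivate", dry_run))  # release jobs
    return steps

for step in run_maintenance("hold-finance-batch", "patch_linux.yml"):
    print(step)
```

The ordering is the whole point: jobs for the affected line of business are held before patching starts and released only after the playbook finishes, so maintenance never collides with running batch work.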
We do use the Role-Based Administration feature. We have been allowing other groups to gain more control over their agents so that they can define connection profiles, and they can do a little bit more on their side without inundating the main team with a lot of tasks. Everybody is happier. They can get things done faster, and they have immediate feedback and response because they're in control. The main Control-M team is not inundated with a lot of different requests from various teams to do a number of mechanical tasks. They don't get asked to create the connection profile for a database. People have all the information there, and they can do it themselves. They can define it in a way so that only they have access to it.
It has helped us achieve faster issue resolution. Control-M reports the error, and it is easy to view the system output of the failed job. Whether it is an Informatica job, a scripted job, or a database job, it is easier to go in, view the issue, and then troubleshoot from there. Most of the time, you can rerun from the point of failure, provided the jobs are defined correctly. For a properly defined job, I would estimate a 70% to 90% reduction in the mean time to resolution.
It has helped us improve our service-level operations performance. We've built an integration between Control-M and our ITSM, which is ServiceNow, and that has certainly given us more visibility within our community through ServiceNow. Every time a production job fails, an incident ticket is cut, and that is highly visible. It gets escalated, too, and there is a much more defined process for resolving the issue. In the past, when you didn't have that level of visibility or that integration, there was always time lost in identifying what the issue was.
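An integration like this usually boils down to mapping a failed job into an incident record. The sketch below builds such a payload; the record fields follow ServiceNow's `incident` table naming, but the mapping itself (job name, folder, order ID, output tail) is our own illustrative convention, and the job values shown are hypothetical.

```python
import json

def incident_payload(job, folder, order_id, sysout_tail):
    """Map a failed batch job to a ServiceNow-style incident record.
    Field names follow the ServiceNow 'incident' table; the mapping
    is a sketch, not the exact integration we run in production."""
    return {
        "short_description": f"Control-M job {job} failed (folder {folder})",
        "description": f"Order ID: {order_id}\nLast output:\n{sysout_tail}",
        "urgency": "2",
        "impact": "2",
        "category": "batch",
    }

payload = incident_payload("DWH_LOAD_DAILY", "FIN_NIGHTLY", "0a1b2c",
                           "ORA-00001: unique constraint violated")
body = json.dumps(payload)  # this JSON body would be POSTed to the ITSM's incident endpoint
```

Keeping the last lines of the job's system output in the ticket is what saves the triage time: the on-call person sees the actual error without logging in anywhere.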
What is most valuable?
The File Transfer component is quite valuable. The integration with products such as Informatica and SAP is very valuable to us as well; rather than having to build our own interfaces into those products, we can use the ones that come out of the box. The integration with databases is valuable too, as we use database jobs quite a bit. The File Watcher component is also indispensable when integrating with other applications that generate files, letting us trigger a workflow on file arrival instead of on a schedule.
What needs improvement?
We have been experimenting with centralized connection profiles. There are some bugs to be worked out, so we don't feel 100% comfortable using only centralized connection profiles. We also have a mix of Control-M agent versions out there, which leads to some complications because earlier agents do not support centralized connection profiles.
A lot of the areas for improvement revolve around the Automation API, because that area is constantly evolving, constantly changing, and constantly being updated. Some bugs are introduced from one version to the next; the regression testing doesn't seem to catch bugs that were fixed in prior versions, and those bugs are then reintroduced in later versions. One particular example: we were trying to use the Automation API to fetch a number of run-as users from the environment. The username had special characters and backslash characters because it was a Windows user ID. In the documentation, there is a documented workaround for that; however, it relied on two particular settings in the Tomcat web server. I later found out that those settings work out-of-the-box in version 9.0.19, but the two options were not included in the config file for 9.0.20. That led to confusion and a lot of time spent diagnosing the issue, with both support and the BMC community. Ultimately, we did resolve it, but that is time that really shouldn't have been spent. It had obviously been working in 9.0.19, and I don't know why it was missed in 9.0.20, but that's a primary example of an improvement that could happen.
We've also noticed that the Control-M agents themselves now run Java components, and over time, they tend to destabilize. It could be that garbage collection isn't happening as it should. We then realize that the agent is consuming quite a large amount of memory on the server. After recycling the agent and releasing that memory, things go back to normal, but there are times when the agent becomes unresponsive. Jobs get submitted, and nothing executes, but we don't know about it until somebody says, "Hey, my job isn't running." When we look at it, the GUI says Executing, but there is no actual process running on the server, so there is some disconnect there. There is no alerting function on the agent that says, "Hey, I'm not responding." It doesn't show up in the alerts or anything like that.
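A simple external watchdog can close that gap. This is an illustrative, stdlib-only sketch (not a Control-M feature): it cross-checks what the scheduler reports as Executing against what the operating system actually shows. The job names and PIDs are hypothetical.

```python
import os

def pid_is_alive(pid):
    """Return True if a process with this PID exists and can be signaled."""
    try:
        os.kill(pid, 0)   # signal 0: existence/permission check only, no signal sent
        return True
    except ProcessLookupError:
        return False
    except PermissionError:
        return True       # process exists but is owned by another user

def find_stuck(reported_executing):
    """Given {job_name: pid} that the scheduler reports as Executing,
    return the jobs whose process no longer exists on this host."""
    return [job for job, pid in reported_executing.items()
            if not pid_is_alive(pid)]

# Our own PID is alive; an out-of-range PID cannot belong to a real process.
stuck = find_stuck({"DWH_LOAD": os.getpid(), "GHOST_JOB": 2**22 + 12345})
```

Run periodically against the scheduler's "executing" list, a check like this would surface the "GUI says Executing, nothing is running" disconnect before a user reports it.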
The integrated guides have not been that helpful to us, but I do find a lot of the how-to videos on the knowledge portal useful. There are some videos where the directions don't quite match the implementation, and there are some typos here and there, but overall, the videos have been more helpful for us.
Its pricing and licensing could be a little better. The regular Managed File Transfer piece is a little overpriced, especially for folks who have already licensed Advanced File Transfer.
What I'm also noticing when trying to recruit for Control-M positions is that the talent pool is quite small. Not a whole lot of companies utilize Control-M, and those that do mostly don't want to let their Control-M resources go if they're good. There is a high barrier to entry for most people learning Control-M. There are Workbench, the Automation API, and so forth, aimed mainly at developers, but there are not a whole lot of resources out there for people to get familiar with administering Control-M, or even to build awareness of the technology. So, it becomes very challenging to acquire new resources. A lot of newer people coming out of college don't even know what Control-M is, and if they do, they think of it as a batch scheduler, which certainly undersells what it has become.
Control-M is a very powerful enterprise tool, but the overall perception has not changed in the five to six years I've been working with it. There's not much incentive for people to dive into that world. It is a very small community, and the value of Control-M is not being showcased adequately, perhaps at the C-level of corporations. I've had multiple conversations with people at other companies who have already stopped using Control-M. I'd estimate 70% of the companies out there do not take full advantage of Control-M's capabilities, and that underutilization really hampers and hinders its reputation, because people then form the mistaken impression that Control-M can only do X, Y, and Z, when in fact it can do so much more. I don't know whether it needs a grassroots marketing movement or a top-down one, but this is the perception, because that's what I'm hearing and seeing. For some of the challenges I face with Control-M, when I go back to my management and say, "Hey, I want to spend more money in this space," they ask, "Why? Can you justify it? This is what we see Control-M as. It's not going to bring us value in this area or that area." I have to go back and develop a new business case to say, "Hey, we need to upgrade to MFT Enterprise," or something like that. It definitely requires a lot of work convincing management to get all these components. In the past, we had to justify acquiring Workload Change Manager, and we had to justify acquiring Workload Archiving. All of these bring benefits not only to our audit environment but also to the development environment, yet the fact that we had to fight so hard to acquire them is challenging.
For how long have I used the solution?
I've been using Control-M for about eight years.
What do I think about the stability of the solution?
Version 9 was very stable. Once they started adding a lot of the newer Java components, the stability suffered. It seems to have gotten better in version 9.0.20, but that could just be my perception.
We run a lot of database client jobs, and we've implemented some things that, I understand, can contribute to agent instability. We sometimes extract a lot of database output and massage that output using other scripts. I've noticed there are certain things you cannot do, or that contribute to instability. For example, the output-scanning functionality certainly has a size limit; you probably don't want to scan anything too large, because that puts a lot of load on the environment.
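One way to keep that cost bounded is to scan only the tail of a large output file rather than the whole thing. This is a stdlib sketch of our own convention, not a Control-M feature; the ORA error codes are just example patterns.

```python
import os
import tempfile

def scan_tail(path, patterns, max_bytes=64 * 1024):
    """Search only the last max_bytes of a log file for error patterns,
    keeping memory and I/O bounded no matter how large the output grows."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(max(0, size - max_bytes))
        tail = f.read().decode("utf-8", errors="replace")
    return [p for p in patterns if p in tail]

# Demo: a large log whose interesting error sits at the end.
fd, log = tempfile.mkstemp(suffix=".log")
os.close(fd)
with open(log, "w") as f:
    f.write("x" * 200_000)                      # large, irrelevant prefix
    f.write("\nORA-01555: snapshot too old\n")  # the error we care about

hits = scan_tail(log, ["ORA-01555", "ORA-00001"])
```

The trade-off is explicit: errors buried earlier than `max_bytes` from the end are missed, but most batch failures report their fatal error in the final lines of output.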
In addition, there are times when the agent becomes unresponsive: jobs get submitted, but nothing executes, and there is no alerting function. Those are the instances of instability I've noticed. Overall, the main application itself, the EM, and the scheduler have been pretty stable.
What do I think about the scalability of the solution?
It is very scalable in terms of job execution. I haven't really explored scaling Control-M and the EM environment to a point where we have hundreds of users accessing it at a given time. That's because I don't have a hundred users who want to access that at a given time, but I do understand that you can distribute the web server more, and then have a load balancer to balance the load. I would think Control-M is a fairly scalable application.
In terms of its users, we have a lot of application support folks. We do have some developers who access Control-M mostly for the non-prod environments to execute and monitor their own jobs. There are some software engineers and operational engineers who are part of the application support teams that access Control-M. As for size or concurrent users, we have about 50 concurrent users at the max.
How are customer service and support?
I would probably give them a nine out of 10. For the most part, they're very helpful, but there's always an initial standard dialogue: for an issue, you have to collect EM logs, agent logs, and so forth, and submit them. Sometimes, we have done all that advance work and submitted it, but they still come back and say, "Hey, we need the logs." It seems like a canned response sent without looking at the ticket.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We've been with Control-M for quite a long time. We have not been using anything else in my history with this organization.
I have not looked at anything recently. I am aware there are other application orchestration solutions out there, but I have not felt the need to explore those options at this time.
How was the initial setup?
If you're deploying using out-of-the-box options, the process is fairly straightforward. If there is some customization that needs to happen, then the process can be complex, and the documentation does not cover some of those complexities.
For the most part, we are standard out-of-the-box. We have run into some performance issues where we later had to go in and make modifications. For example, we had to stand up different gateways for various purposes because one gateway alone could not take the load, in particular because we had installed Workload Archiving, and that was taking up a lot of resources. Human users were not able to perform their actions because the archive user was consuming so much of the server's resources. So, there was a lot of tweaking, and we had to break out and distribute some of the components.
In terms of an implementation strategy or deployment plan, the environment has always had Control-M, and we just had to upgrade it. We've had Control-M in our environment for quite a long time, probably since version 6. As we progressed through different versions, we obviously had to expand the environment and the platforms. We initially started with Control-M on AIX and later moved to Control-M on Linux. As you go to Linux, there is planning for high availability, production environments, disaster recovery environments, and so forth. You have to plan for pairing up a lot of the BMC Control-M components and identifying where a load balancer or a DNS alias is required so that you can quickly fail over if something happens. Then, of course, there is sizing the environment in terms of how many jobs are running and how many executions are happening. This is how we plan.
What about the implementation team?
We've used the AMIGO program, and then we've performed the upgrades ourselves.
For its day-to-day administration, we have a team of five people. They're administrators and schedulers.
What was our ROI?
Its return on investment is quite high, and that's mostly because we use so many of Control-M's capabilities. We also extend those capabilities: we write our own scripts to integrate Control-M with many other applications, such as Automation Anywhere and Alteryx, and vice versa; we have helped other teams develop their own integrations with the REST API and Control-M. So, the ROI is quite high for our use case. Based on conversations with some community partners out there, though, their ROI is probably quite low because they're not making use of all these new features. I don't know whether they lack the skill set, or whether their management or process structure is holding them back. A lot of large companies I know like to maintain the status quo, which is why they're slow to adapt and slow to move. That is going to hurt them in the long run, and in the meantime, it can hurt the adoption of Control-M as well.
What's my experience with pricing, setup cost, and licensing?
Its pricing and licensing could be a little better. Based on my experience and discussions with other existing customers, everybody feels that the regular Managed File Transfer piece, not the enterprise one, is a little overpriced, especially for folks who have already licensed Advanced File Transfer. We understand that Advanced File Transfer is going end-of-life and that there is some additional functionality built into MFT, but that additional functionality does not really justify the huge price increase over what we're paying for AFT. This has actually driven a lot of people to look for alternative solutions.
I know they are now moving more toward endpoint licensing or task-based licensing. In my eyes, the value of Control-M is the ability to break down monolithic scripts. You don't want to have to wrap everything up in one monolithic script and say, "Hey, I'm executing one task because I want to save money." That defeats the purpose and value of Control-M. By taking that monolithic script and breaking it down into its 10 most basic components, you can monitor each step. It is self-documenting, because within Control-M you can see how the flow works, and you can recover from any one of those 10 steps rather than having to rerun the whole monolithic script should something fail. That being said, endpoint licensing does make more sense, but the pricing could be more forgiving.
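Breaking a monolith into dependent, individually rerunnable steps can be sketched as a definition a script generates. The structure below is a simplified illustration in the spirit of Jobs-as-Code JSON, not the exact Control-M Automation API schema, and the folder, step names, and script paths are hypothetical.

```python
import json

# One monolithic nightly script, split into dependent steps; on failure,
# you rerun from the failed step instead of the whole chain.
steps = ["extract", "validate", "transform", "load", "report"]

folder = {"NightlyETL": {"Type": "Folder"}}
for i, step in enumerate(steps):
    job = {
        "Type": "Job:Command",
        "Command": f"/opt/etl/{step}.sh",  # hypothetical script paths
    }
    if i > 0:
        # Each step waits on the previous one, making the flow
        # self-documenting when viewed in the scheduler.
        job["WaitFor"] = steps[i - 1]
    folder["NightlyETL"][step] = job

definition = json.dumps(folder, indent=2)
```

Because the definition is plain JSON built from a list, adding an eleventh step or reordering two steps is a one-line change that can be code-reviewed like any other artifact.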
Which other solutions did I evaluate?
N/A
What other advice do I have?
It is worth the time and money to learn more about Control-M. You should learn all of its features and really explore and test its capabilities; that's the only way to get comfortable with what Control-M can do. A lot of people aren't aware of just how flexible a platform Control-M is, especially with all the new features being added via the Automation API. These features are helping to drive Control-M, and things developed in it, toward a microservices model.
We are just beginning to explore using Control-M as part of our DevOps automation toolchains and to leverage its "as-code" interfaces for developers. Obviously, there is a bit of a learning curve for developers in seeing the value of developing Jobs-as-Code. Currently, we're walking developers through it and holding their hands a little, but we are heading in that direction, because it provides artifacts you can version-control and change quickly and easily. You can redeploy much faster than when jobs are defined only in the graphical user interface; previously, to modify a job, you either did it via the GUI or exported it as XML and modified those components. Once you get developers closer to their job flows, you can theoretically speed up the delivery of applications along with their scheduled jobs.
I don't have a whole lot of experience with other scheduling orchestration environments, but from everything that I've heard while speaking with other colleagues, I would say Control-M ranks fairly high. I would rate it a nine out of 10. Control-M usually is the platform that people are moving to, not moving away from.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Enabled us to consolidate and streamline our development process, while building on existing skills
Pros and Cons
- "We used Control-M's Python Client and cloud data service integrations with AWS and, as a feature, it was very customizable. It gave us a lot of flexibility for customizing whatever data maneuver we wanted to do within a pipeline."
- "I would like to see them adopt more cloud. Most companies don't have a single cloud, meaning we have data sources that come from different cloud providers. That may have been solved already, but supporting Azure would be an improvement because companies tend not to have only AWS and GCP."
What is our primary use case?
Our use case was mainly about consolidating our data pipeline from different sources and doing some data transformations and changes. We needed to get data from different sources into a state where we could act on it into one consolidated data set.
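As a toy illustration of that consolidation step (with purely hypothetical source systems and field names), merging records from several sources keyed on a shared identifier looks like this:

```python
def consolidate(*sources, key="customer_id"):
    """Merge records from several source systems into one dataset,
    keyed on a shared identifier; later sources add or overwrite fields."""
    merged = {}
    for source in sources:
        for record in source:
            merged.setdefault(record[key], {}).update(record)
    return list(merged.values())

# Hypothetical extracts from two source systems.
crm = [{"customer_id": 1, "name": "Acme"}]
billing = [{"customer_id": 1, "balance": 250.0},
           {"customer_id": 2, "balance": 80.0}]

dataset = consolidate(crm, billing)
```

In a real pipeline, each source extract and the merge itself would run as separate scheduled steps, so a failure in one source doesn't force re-pulling the others.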
How has it helped my organization?
It gave us the ability to consolidate a diverse set of solutions into one comprehensive solution that streamlined our development processes. It was straightforward to adopt and we could build on existing skills without having to have 10 solutions for 10 problems.
And when it came to creating actionable data, it gave us the ability to move faster and at scale. By adopting a solution like Control-M, we were able to scale and deliver faster data transformations and maneuvers, turning data into insights in a more efficient and scalable way.
The ability to deliver faster and at scale was important. Business and management always wanted us to deliver faster and bigger and we were able to do both with the solution that we implemented using Control-M. We were able to respond faster to changes and business needs, at scale.
Having a feature-rich solution enabled us to aggregate all of our processes into it, and that made the overall execution, from a project and portfolio perspective, a lot more efficient.
We were also able to respond to audit requests, because it's centralized, in a much more efficient way.
What is most valuable?
There isn't a single feature that is most valuable, but if I had to choose one, it would be the rich ability it gave us for making customized scripts. That was probably the most unique feature set for our situation. We used Control-M's Python Client and cloud data service integrations with AWS and, as a feature, it was very customizable. It gave us a lot of flexibility for customizing whatever data maneuver we wanted to do within a pipeline.
The Python Client and cloud data service integrations have a rich set of features with flexibility. It did not require additional, crazy skills or experience to deal with it. It was a nice transition into enabling a data scientist to leverage existing skills to build those pipelines.
Creating, integrating, and automating data pipelines with Control-M was straightforward. It did require some knowledge and training, but compared to other solutions, it was a lot simpler. Working with data workflows, with the data-coding language integrated into Control-M, was straightforward. The level of difficulty was somewhere between "medium" and "easy." It was not that hard to leverage existing skills and knowledge within this specific feature.
The user interface for creating, monitoring, and ensuring delivery of files as part of the data pipeline was very actionable. It was almost self-explanatory. Somebody with basic user-interface experience could navigate the calls to action and the configuration that is required. It was well-designed.
What needs improvement?
I would like to see them adopt more cloud. Most companies don't have a single cloud, meaning we have data sources that come from different cloud providers. That may have been solved already, but supporting Azure would be an improvement because companies tend not to have only AWS and GCP.
For how long have I used the solution?
I used it for a couple of years.
What do I think about the stability of the solution?
It's fairly stable. I don't recall any specific issues.
What do I think about the scalability of the solution?
It's fairly scalable. For our needs, it scaled very nicely.
We have a shared model where we have a centralized, shared service organization when it comes to data. Different people will use it, but it's centralized.
How are customer service and support?
We used other solutions from BMC as well, and their customer support was always great. I give them a 10 out of 10.
Training and a knowledge base were available, or you could ask a question by submitting a ticket.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We had DataStage from IBM and SSIS.
The switch was really about streamlining the process. We had other tools that only did partial processes or were not doing it with the speed and efficiency that we were looking for. We were looking for a solution that could streamline things and solve 90 percent of our data challenges.
What was our ROI?
The analysis that I saw validated that the ROI was within a couple of years.
What's my experience with pricing, setup cost, and licensing?
The pricing was competitive, from what I understand.
Which other solutions did I evaluate?
We looked at continuing to use the same solutions we had been using, and there were a couple of other cloud-based solutions that we evaluated. One of them was Matillion. The ease of use was one component of our decision, as was the flexibility of scripting with Python. Those were the key differentiators.
What other advice do I have?
For the on-prem solution, we had to do the patching and whatever else the vendor required, but the cloud implementation was a managed model: the upgrades, changes, and patching are done directly by the vendor.
Control-M was a critical piece of the puzzle, to help us with all the data transformation and projects that we had to do. It was part of either one specific project or even a larger project that required that middle data transformation so that we could get to analytics or any other consumption of that data.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.