What is our primary use case?
Our primary use case is for file automation: detecting the presence of files, moving files from one system to another, doing FTP uploads, FTP downloads, and a large number of custom execution methods. Custom execution methods are a way to create your own code that extends the JAMS toolset.
For example, one of our systems has a proprietary tool that must be run to import a file into that system. However, the file import definitions are dynamic inside that system; you could have 100 different file formats. We created a custom file import/export method for it. The JAMS job calls the other system's API, the job definition tells it the path of the file to load and what parameters to use, and it then reads and displays the remote system's API return results. Custom execution methods are the meat and potatoes of what we use JAMS for.
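The pattern described above can be sketched roughly as follows. This is an illustrative Python sketch, not JAMS code: the function names (`import_file`, `run_import_job`) and the fake API are hypothetical stand-ins for the job definition's parameters and the remote system's real API.

```python
# Hypothetical sketch of the custom-execution-method pattern: the job
# definition supplies a file path and parameters, the method calls the
# remote system's API, then relays the API's results back into the job log.
# All names here are illustrative, not actual JAMS library APIs.

def import_file(api, file_path, params):
    """Call the remote system's import API and surface its result."""
    result = api(file_path, params)           # an HTTP POST in practice
    print(f"Import of {file_path}: {result['status']} "
          f"({result['records']} records)")   # shown in the job's log
    return result

def run_import_job(file_path, fmt):
    # Stand-in for the proprietary system's API; a real execution method
    # would call it over HTTP or through a vendor SDK.
    def fake_api(path, params):
        return {"status": "OK", "records": 100, "format": params["format"]}
    return import_file(fake_api, file_path, {"format": fmt})

outcome = run_import_job("inbound/claims.txt", "claims-v2")
```

The point of the pattern is that the job definition only carries data (path and parameters), while the execution method owns the integration logic.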
We have a single production JAMS server that serves as the primary JAMS node where most of our work is done. We have an agent server where the primary node issues some job commands to run on that agent. Then, we have a test JAMS server which we use when we are testing execution methods and other things. We have plans to stand up a failover server, but have not done so. The back-end database for our JAMS production system is Microsoft SQL Standard Edition and all our servers are on Windows.
How has it helped my organization?
Automation is subject to a volatile environment. That's reality. You have a client that provides you a file with the wrong naming convention or in the wrong format, or they are supposed to give you a file at 8:00 AM every morning, then one day they simply don't give you that file. Those are the sort of nuisances that create headaches for your production staff as they are trying to work through and detect them. Sometimes, they will fly under the radar, especially if you have a less sophisticated job scheduler running batch jobs, like Windows Task Scheduler. They run at a certain time and are expected to just work. However, maybe two days later, someone finds out that the file, which we normally get every Monday, was not presented to us. Those are the tricky little devils that will get you.
What we do when we develop a JAMS file workflow is build certain checkpoints into it. If we have a job that runs at a certain time and expects a certain file to exist, we will have the job specifically check for that. If the file doesn't exist, it will create a very specific, actionable alert. We design that to go out to our file processing operators, who can respond accordingly by contacting the client. When we are doing an export, if we want to run a file out of our systems, the job that runs the export can detect that there were no records that day and report that back so we can act on it. Or, after the export job runs, you could have a follow-up job that checks that exactly one file is available in the export destination.
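The checkpoint idea above can be sketched in a few lines. This is a minimal illustration (not JAMS code); the paths and the alert mechanism are hypothetical.

```python
# Minimal sketch of a file-existence checkpoint: before the main work runs,
# verify the expected file exists and raise a specific, actionable alert if
# it does not. The path and alert callback are illustrative.
from pathlib import Path

def check_expected_file(path, alert):
    """Fail fast with an actionable alert if the expected file is missing."""
    p = Path(path)
    if not p.exists():
        alert(f"Expected file {p} was not delivered; contact the client.")
        return False
    return True

alerts = []
ok = check_expected_file("missing_inbound/daily_0800.csv", alerts.append)
# ok is False and alerts now holds one actionable message for the operators.
```

In a real workflow the alert callback would send the email to the file processing operators, and the downstream job would only run when the check passes.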
You can't necessarily prevent environmental issues from happening. You can't expect every client to always do what they are supposed to do and give you what they are supposed to give you every day. However, when they don't, at least you can know about it as soon as possible and take action on it rather than finding out about it by accident sometime down the road.
It has drastically improved our ability to scale our automation. Before JAMS, there were a lot of manual processes; we had a couple of operators who spent all day running them. With human intervention and manual processes, the outcome is only as good as the person following the procedure. Human error is a big problem.
Shortly after we adopted JAMS, our file volume started ramping up. The number of files, reports, and other processes that we have had to automate has grown exponentially. We have been able to keep up with that load. JAMS has been able to scale up our organization's scheduling and automation without adding staff. The people who previously did these manual processes are now trained on monitoring the automation and scheduling of those processes. They only step in and respond to issues, rather than running manual procedures all day.
There are many platforms that an organization might use. We have Microsoft SQL Server, Artiva, QlikView, and Qlik Sense. All those platforms have built-in schedulers: the SQL scheduler, QlikView scheduler, Artiva scheduler, and Windows scheduler. Without an enterprise scheduler, all those independent schedulers can only be coordinated by time of day. If you want to export a file at 8:00 AM, then set up a scheduled job that runs at 8:30 to load that file into your BI tool, in theory that should work. However, that sort of time-based, unintelligent coordination between systems falls apart when anything goes wrong. Let's say your 8:00 job should be done in five minutes and your 8:30 job is on your BI scheduler. If that 8:00 job runs long, doesn't produce a file, or throws an error, your BI scheduler doesn't know and just does what it always does: it runs its 8:30 job because there is no coordination. Now, users are wondering why they have a BI report with yesterday's data in it. With JAMS, we can chain jobs together in a sequence. If the first job throws an error, the second job never runs. An operator can resolve the problem and resume the sequence.
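The difference between time-based coordination and dependency chaining can be shown with a toy example. This is illustrative Python, not JAMS; the job names and functions are made up.

```python
# Toy illustration of dependency-based chaining: the second job runs only
# if the first succeeded, instead of blindly firing at its scheduled time.

def run_sequence(jobs):
    """Run jobs in order; stop at the first failure so an operator can resume."""
    for name, job in jobs:
        try:
            job()
        except Exception as exc:
            print(f"Sequence halted at '{name}': {exc}")
            return name           # the failed step, for the operator to resolve
    return None                   # the whole chain succeeded

def export_file():
    raise RuntimeError("no records produced today")

def load_into_bi():
    print("loading file into BI tool")

failed = run_sequence([("export", export_file), ("bi-load", load_into_bi)])
# failed == "export"; the BI load never ran against stale data.
```

A time-based setup would have run `load_into_bi` at 8:30 regardless, which is exactly the "yesterday's data" failure mode described above.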
We have tried our best to consolidate our scheduling and not to use the Microsoft SQL job scheduler, BI tools, and built-in schedulers, but rather to use JAMS and create custom execution methods to schedule everything in one place.
What is most valuable?
The extensibility feature, i.e., the custom execution method ability, is the most valuable feature. We can write a C# interface using the JAMS libraries. We copy the DLLs for the client interface over to our remote desktop and JAMS servers. Then, any of our JAMS users can open up a job definition and see the control developed by our developers. When the job command is issued, it executes our developers' code.
I am happy with the exception handling, for the most part. When an exception occurs on one job inside a series of jobs, it can stop that series from running and send an email so someone can respond. There is also a monitor view where you can see everything that is currently running and any jobs that are currently in an error state. You can find them and try to rerun a job, or cancel it if it doesn't actually need to run.
JAMS will attach the console logs from anything that has an error on the email that goes to the operators. Also, inside of the job monitor, you can go to the logs and dig down into the details to see what went wrong.
It has the ability to use PowerShell to schedule jobs and enable or disable triggers. The fact that they have JAMS PowerShell cmdlets is useful. This is not central to our use of JAMS, but I appreciate it. While they have extended PowerShell and created cmdlets, I tend to use them when I have to do things like kill all the jobs currently in the schedule if something catastrophic has occurred. I use them on my test server more than in production. On my test server, if I am running a bunch of tests and jobs but just want to wipe out the whole schedule, I can use a PowerShell command to do that.
From time to time, a job is executed and gets stuck in a loop. It gets hung. Maybe the remote system freezes up. Something abnormal happens. It is pretty easy to deal with those. You can see them inside the JAMS monitor because JAMS will automatically calculate the average time that it takes for a given job to execute as long as it has had a few successful runs. The JAMS Scheduler can predict what should take five minutes to run. If it is running for 30 minutes, there is a percentage that shows inside the scheduler that the job is now at 600% of the normal run time. So, you will see this big number, 600% and climbing inside the monitor. You can research that. You can go find the hung process on the source system and respond accordingly. You can set up jobs such that they send alerts or have runaway job limitations. I personally don't tend to use the runaway feature. Our operators notice and respond accordingly to long-running jobs.
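The runaway indicator described above is essentially a percentage of the historical average runtime. Here is a back-of-the-envelope version in Python; it mirrors the idea, not JAMS's internals.

```python
# Compute the runaway indicator: current elapsed time as a percentage of
# the average of past successful runtimes. A job 30 minutes into a run
# that normally takes 5 minutes shows as 600%.

def percent_of_average(elapsed_minutes, past_runtimes_minutes):
    average = sum(past_runtimes_minutes) / len(past_runtimes_minutes)
    return round(100 * elapsed_minutes / average)

history = [5, 5, 5]                       # a few successful 5-minute runs
print(percent_of_average(30, history))    # → 600
```

Once the percentage climbs well past 100, an operator knows to go look for a hung process on the source system.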
What needs improvement?
The biggest area with room for improvement is the area that my organization benefits the most from using JAMS, and that is custom execution methods. I happen to have a very good C# developer. Ever since we got JAMS, he has spent a lot of time talking to JAMS developers, researching the JAMS libraries, and creating custom execution methods. He's gotten very good at it. He is now able to create and maintain them very easily, but that was hard-won knowledge. If I ever lose this developer, I would be hard-pressed to find anyone who could create JAMS custom execution methods as well as he can, since there really isn't much help available, such as documentation or examples, on how to create custom execution methods.
I really think that they could benefit greatly from being much more transparent about C# development, maybe by publishing a JAMS cookbook or a developer portal where developers could throw ideas at each other.
One of my complaints with the marketing around JAMS is that it says things like, "It integrates with Teams". They talk about integrating with a lot of things, but marketing doesn't tell you that they are talking about JAMS running PowerShell jobs. Since PowerShell can automate things like SharePoint and Teams, that is how marketing gets away with saying it has so many integrations. JAMS doesn't have as many built-in integrations as they advertise. I think they should build more of them, and improve on the ones they have built.
For how long have I used the solution?
We purchased JAMS six years ago. I have been using it the whole time. I was involved in shopping for potential enterprise job scheduler solutions and selecting JAMS.
What do I think about the stability of the solution?
I would be hard-pressed to think of any occurrence that we have had over the last five years where JAMS has crashed, had any sort of catastrophic failures, or instability. It is a pretty rock-solid system. I am happy with it.
What do I think about the scalability of the solution?
Scalability is great. It has the ability to add agents. We are a Windows shop, and at some point, I am sure we will expand and add more Windows agents. If we were running other platforms, such as IBM or Unix, there are agents for those too. A company with a mix of systems would do well with JAMS because of that flexibility. The ability to have multiple servers and failover servers is a great benefit. Because we are a fairly small shop, we haven't had to really take advantage of that scalability very much, but we are glad to know that it is there.
It is used extensively across the organization in all our business intelligence reporting data refreshes, data warehouse SSIS packages, file importing and exporting, and file movement. We use it for sending automated ticket creation emails to our ticketing system.
The place where I have targeted extending the solution's usage is the Artiva systems, where not all jobs are scheduled inside of JAMS yet. There are still some legacy jobs scheduled inside Artiva's internal job scheduler. I plan on moving those into JAMS and making them JAMS jobs.
How are customer service and support?
When I find room for improvement, I log a ticket with JAMS. So, I have logged plenty of tickets.
I would rate support as an eight out of 10, mainly for lack of documentation and support for the custom execution method development.
Which solution did I use previously and why did I switch?
It has eliminated the need to monitor the built-in job schedulers, e.g., the SQL Server scheduler and the Qlik scheduler. You need special skills to go and investigate a job that might be running on those schedulers. We didn't have an enterprise scheduler before JAMS, so I can't say that it eliminated a different enterprise scheduler, but it does keep us from having to train our operators on all the various systems' schedulers. That is one of the benefits of consolidating your scheduling down to a single enterprise job scheduler: you only have to train on one tool. Once a person knows how to look at the job run history, job logs, and job definitions inside of JAMS, they don't need to know how to do that on SQL Server. They don't need to know how to research a Windows scheduled task running a batch job or where that batch job logs its results. All of that goes away because you can look at everything in one place.
My experience at a former employer was with Tivoli and Tidal job schedulers. Tivoli and Tidal were larger, more complex, less intuitive, and less user-friendly. We also didn't have the ability to do the C# custom execution methods that we do in JAMS. Also, the price was in a completely different ballpark. Tivoli and Tidal were much more expensive.
How was the initial setup?
It was pretty straightforward. When we started with JAMS, we didn't even have SQL Server. It natively installs SQL Express for you, so you don't need to buy a SQL Server license if you don't want to. You don't need to buy agents if you don't want to; you can have all the jobs running locally on the JAMS server. That is what we did for a while before we got the separate agent license. Learning how to use the tool was not very challenging. It was pretty easy to pick up.
The biggest challenge was when we saw and heard what we could do with custom execution methods. We knew we wanted to do it, but it took a long time for our developer to figure out exactly how to do it right.
What about the implementation team?
The JAMS developer and I are the administrators of the system. We do the upgrades, the custom execution method development, create a lot of the job definitions, and help train people.
There are two people that I would classify as operators. They monitor jobs. They respond to errors. They rerun failed jobs and move files. Also, if the client gave us a file named incorrectly, it would be their job to rename it, fix it, and tell the client that they did something wrong, then rerun the failed job.
There are about four other power users who create job definitions.
There are a large number of people in the company who might receive an email when a report is finished or be notified if there is a problem with a job that was created for their benefit. However, I wouldn't consider those people as users so much as they are people who benefit from the product. There might be 30 of them.
What was our ROI?
We have easily seen ROI. Given the number of jobs that we are running, the number of processes we have automated, and the number of new clients and processes that we've added since taking on JAMS, all without having to add staff, it has paid for itself many times over.
What's my experience with pricing, setup cost, and licensing?
Take advantage of its scalability. You can start small. The initial cost is very reasonable. Once you have started picking up the tool and adopting it, then you can scale up from there and buy more agents.
There are annual licensing and maintenance costs. If you add agents or servers, every one you add has an additional annual cost. Then there is the basic cost of any software, which is the server hardware and operating system.
Which other solutions did I evaluate?
Yes, I evaluated Tivoli, Tidal, and several other enterprise job schedulers. It has been years, so it's hard to remember specifically which others I looked at.
What other advice do I have?
I have three examples of working very closely with enterprise job schedulers. If a company doesn't have an enterprise job scheduler, then JAMS is an easy choice. Really embedding the use of an enterprise job scheduler into your company culture is important. You need to move jobs out of all your other job schedulers and centralize them in JAMS.
Don't just use it to schedule jobs on one system. Don't just use it as a Windows Task Scheduler replacement. Don't just use it for batch files. Anywhere that you see a scheduler, you can replace that scheduler with JAMS. Get a good C# developer and start making your own custom execution methods.
Contact JAMS support and get your developers talking to their developers. That will help you get up to speed a lot faster.
For anyone coming off of another job scheduler, like Tivoli or Tidal, I would tell them that they have made a good choice. This solution is just as powerful and much more cost-effective.
Lean into it. Really use it. Don't just use it for this and that. Don't have your other systems and job schedulers doing their own things, like exporting files, and then rely on a JAMS file trigger to detect the presence of that file. Have the JAMS scheduler kick off the job that creates the file. Don't do it half-heartedly.
I would rate it as 10 out of 10. Anytime that I am geeking out with other IT guys about their systems and processes, I always end up talking about JAMS.
Which deployment model are you using for this solution?
On-premises
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Thanks for the 5-star review of JAMS! It's great to see you're enjoying the stability and scalability of JAMS over the past 5 years. Also, thanks for your feedback on creating more documentation and/or information guides on how to create custom execution methods. I have shared this information with our product team. If interested, we have a customer community, Automation Insiders, for current customers to share experiences and ideas on all types of topics. This may be a great place to start. If you ever find you need any assistance, please do not hesitate to reach out as we are always at your disposal. Thank you again!