Our primary use case is file automation: detecting the presence of files, moving files between systems, performing FTP uploads and downloads, and running a large number of custom execution methods. Custom execution methods are a way to create your own code that extends the JAMS toolset.
For example, one of our systems is very proprietary and has a tool that must be run to import a file into it. However, the file import definitions inside that system are dynamic; you could have 100 different file formats. We created a custom file import/export method for that system. The JAMS job calls the other system's API; the job definition tells it the path of the file to load and what parameters to use, and it then reads and displays the remote system's API return results. Custom execution methods are the meat and potatoes of what we use JAMS for.
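As a rough illustration only, here is a minimal sketch of what a custom import call like that might look like. The endpoint URL, parameter names, and the import_file helper are hypothetical stand-ins, not the actual API of our proprietary system or of JAMS; the real values would come from the JAMS job definition.

```python
import sys
import requests  # assumed HTTP client; the real system's API and auth may differ

# Hypothetical endpoint -- in practice the JAMS job definition supplies this.
IMPORT_API_URL = "https://internal-system.example.com/api/import"

def import_file(file_path: str, file_format: str, timeout: int = 300) -> dict:
    """Ask the remote system to import one file and return its API response."""
    response = requests.post(
        IMPORT_API_URL,
        json={"path": file_path, "format": file_format},
        timeout=timeout,
    )
    response.raise_for_status()  # an HTTP error should fail the JAMS job
    result = response.json()
    print(result)                # surface the API return results in the job log
    return result

if __name__ == "__main__":
    # The JAMS job definition would pass the file path and format as parameters.
    import_file(sys.argv[1], sys.argv[2])
```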
We have a single production JAMS server that serves as the primary JAMS node where most of our work is done. We also have an agent server; the primary node issues some job commands to run on that agent. Then we have a test JAMS server, which we use when we are testing execution methods and other things. We have plans to stand up a failover server but have not done so yet. The back-end database for our JAMS production system is Microsoft SQL Server Standard Edition, and all our servers run on Windows.
Automation is subject to a volatile environment. That's reality. A client provides you a file with the wrong naming convention or in the wrong format, or they are supposed to give you a file at 8:00 AM every morning and one day they simply don't. Those are the sorts of nuisances that create headaches for your production staff as they try to detect and work through them. Sometimes they fly under the radar, especially if you have a less sophisticated job scheduler running batch jobs, like Windows Task Scheduler. Jobs run at a certain time and are expected to just work. However, maybe two days later, someone discovers that the file we normally get every Monday was never delivered. Those are the tricky little devils that will get you.
When we develop a JAMS file workflow, we build certain checkpoints into it. If a job runs at a certain time and expects a certain file to exist, we have the job specifically check for that. If the file doesn't exist, it creates a very specific, actionable alert. We design that alert to go out to our file processing operators, who can respond accordingly by contacting the client. On the export side, when we send a file out of our systems, the export job can detect that there were no records that day and report that back so we can act on it. Or, after the export job runs, a follow-up job can check that exactly one file is present in the export destination.
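The sketch below shows roughly what those two checkpoints could look like as a script a job might run; the file paths, mail host, and addresses are hypothetical examples, not our real configuration, and the actual alerting in JAMS could just as easily be driven by the job's own notification settings.

```python
from pathlib import Path
import smtplib
from email.message import EmailMessage

# Hypothetical paths and mail settings -- real values would live in the JAMS job parameters.
INBOUND_FILE = Path(r"\\fileserver\inbound\client_a\daily_remit.csv")
EXPORT_DIR = Path(r"\\fileserver\outbound\client_a")
ALERT_TO = "file-ops@example.com"

def send_alert(subject: str, body: str) -> None:
    """Send a specific, actionable alert to the file processing operators."""
    msg = EmailMessage()
    msg["Subject"], msg["To"], msg["From"] = subject, ALERT_TO, "jams@example.com"
    msg.set_content(body)
    with smtplib.SMTP("mailhost.example.com") as smtp:
        smtp.send_message(msg)

def check_inbound() -> None:
    # Checkpoint 1: the client's file must exist before the import runs.
    if not INBOUND_FILE.exists():
        send_alert(
            "Client A daily file is missing",
            f"Expected {INBOUND_FILE} by 8:00 AM and it is not present. Please contact the client.",
        )
        raise SystemExit(1)  # non-zero exit marks the job as failed

def check_export() -> None:
    # Checkpoint 2: exactly one file should be sitting in the export destination.
    files = list(EXPORT_DIR.glob("*.csv"))
    if len(files) != 1:
        send_alert(
            "Client A export check failed",
            f"Expected exactly one export file in {EXPORT_DIR}, found {len(files)}.",
        )
        raise SystemExit(1)

if __name__ == "__main__":
    check_inbound()
```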
You can't necessarily prevent environmental issues from happening. You can't expect every client to always do what they are supposed to do and give you what they are supposed to give you every day. However, when they don't, at least you know about it as soon as possible and can take action, rather than finding out by accident sometime down the road.
It has drastically improved our ability to scale our automation. Before JAMS, there were a lot of manual processes, and we had a couple of operators who spent all day running them. A manual process is only as good as the person following the procedure, and human error is a big problem.
Shortly after we adopted JAMS, our file volume started ramping up. The number of files, reports, and other processes that we have had to automate has grown exponentially. We have been able to keep up with that load. JAMS has been able to scale up our organization's scheduling and automation without adding staff. The people who previously did these manual processes are now trained on monitoring the automation and scheduling of those processes. They only step in and respond to issues, rather than running manual procedures all day.
There are many platforms that an organization might use. We have Microsoft SQL Server, Artiva, QlikView, and Qlik Sense. All those platforms have built-in schedulers: the SQL Server scheduler, the QlikView scheduler, the Artiva scheduler, and Windows Task Scheduler. Without an enterprise scheduler, those independent schedulers can only be coordinated by time of day. If you export a file at 8:00 AM and set up a scheduled job at 8:30 that loads that file into your BI tool, in theory that should work. However, that sort of time-based, unintelligent coordination between systems falls apart when anything goes wrong. Say your 8:00 job should finish in five minutes and your 8:30 job sits on your BI scheduler. If the 8:00 job runs long, doesn't produce a file, or throws an error, your BI scheduler doesn't know; it just does what it always does and runs its 8:30 job, because there is no coordination. Now users are wondering why their BI report has yesterday's data in it. With JAMS, we chain jobs together in a sequence: if the first job throws an error, the second job never runs, and an operator can resolve the problem and resume the sequence.
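To make the dependency idea concrete, here is a minimal sketch of chaining two steps so the second never runs when the first fails. In JAMS itself this is configured as jobs in a sequence rather than in code; the script names below are hypothetical placeholders.

```python
import subprocess
import sys

# Hypothetical commands -- in JAMS these would be jobs in a sequence, where the
# downstream job only runs after its upstream dependency completes successfully.
STEPS = [
    ["python", "export_daily_file.py"],   # the "8:00" export
    ["python", "load_into_bi_tool.py"],   # the "8:30" BI load
]

def run_sequence(steps) -> None:
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            # Halt the chain: the BI load never runs on stale or missing data.
            # An operator can investigate, fix the issue, and resume from here.
            print(f"Step {' '.join(step)} failed with exit code {result.returncode}; halting sequence.")
            sys.exit(result.returncode)

if __name__ == "__main__":
    run_sequence(STEPS)
```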
We have tried our best to consolidate our scheduling: rather than relying on the Microsoft SQL Server job scheduler, the BI tools, and the other built-in schedulers, we use JAMS and custom execution methods so that everything is scheduled in one place.