In many ways, it's like using an HPC environment, but a lot more flexible. In theory, you can mix many different kinds of machines, ranging from those geared toward computational speed to high-memory or GPU machines.
The idea is to break your computational jobs into smaller jobs that can be run on multiple machines. One of these virtual machines orchestrates all of the computation: it sends out jobs to different machines, waits for them to be done, and then runs the next process in the sequence. It's simply a way to run multiple processes on multiple machines in the cloud.
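To illustrate the idea, here is a minimal sketch using boto3 (the Python AWS SDK) of how one submission can fan out into many smaller jobs; the queue name, job definition, and command are placeholder assumptions, not an actual pipeline.

```python
# Minimal sketch (assumed names): submit an AWS Batch array job with boto3
# so that a single request fans out into many smaller child jobs.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="chunked-analysis",
    jobQueue="my-job-queue",            # placeholder queue name
    jobDefinition="my-job-definition",  # placeholder job definition
    arrayProperties={"size": 100},      # fan out into 100 child jobs, one per data chunk
    containerOverrides={"command": ["python", "process_chunk.py"]},  # placeholder command
)
print("Submitted array job:", response["jobId"])
```

Each child job receives an AWS_BATCH_JOB_ARRAY_INDEX environment variable, so it knows which chunk of the input it is responsible for.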
Scalability is the most valuable feature for me. I could run anywhere from 32 cores to over 2,000 cores. So it scales very well, and it's really good for situations where jobs are very heterogeneous, meaning, for example, a long-running job that at one stage needs a lot of small, compute-intensive machines.
But then, in the second stage, you may need a few very high-memory machines. It's really good for those kinds of HPC situations, where you can customize and tailor the compute, memory, and GPU requirements for each stage of the job.
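For that heterogeneous, multi-stage case, here is a hedged sketch with boto3: a compute-heavy first stage followed by a high-memory second stage that only starts once the first one finishes. All queue names, job definitions, and resource values are illustrative assumptions.

```python
# Sketch of a two-stage pipeline with different resource shapes per stage.
import boto3

batch = boto3.client("batch")

# Stage 1: many small, compute-intensive jobs.
stage1 = batch.submit_job(
    jobName="stage1-compute",
    jobQueue="cpu-queue",               # placeholder queue backed by compute-optimized instances
    jobDefinition="compute-job-def",    # placeholder
    arrayProperties={"size": 500},
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "8192"},    # 8 GiB per child job
        ]
    },
)

# Stage 2: a single high-memory job that waits for the whole first stage.
stage2 = batch.submit_job(
    jobName="stage2-highmem",
    jobQueue="highmem-queue",           # placeholder queue backed by memory-optimized instances
    jobDefinition="highmem-job-def",    # placeholder
    dependsOn=[{"jobId": stage1["jobId"]}],  # runs only after all stage-1 children succeed
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "2"},
            {"type": "MEMORY", "value": "262144"},  # 256 GiB
        ]
    },
)
```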
You can run as many jobs as you want, provided you pay the cost. So, it's about the scalability to run really large jobs in a really short amount of time with a very minimal setup.
One person can set up a compute cluster on AWS Batch. I don't need to own the hardware, have people maintain those resources, or handle software installations myself.
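To give a sense of how little setup that involves, here is a minimal sketch of creating a managed compute environment and a job queue with boto3. The role ARNs, subnet, and security group are placeholders for values from your own account.

```python
# Sketch: one person standing up a Batch "cluster" from a laptop.
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="demo-env",
    type="MANAGED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 2048,                    # scales from 0 up to 2,048 vCPUs on demand
        "instanceTypes": ["optimal"],        # let Batch pick instance sizes
        "subnets": ["subnet-aaaa1111"],      # placeholder
        "securityGroupIds": ["sg-bbbb2222"], # placeholder
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",  # placeholder
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",  # placeholder
)

# In practice, wait until the compute environment reports VALID before this step.
batch.create_job_queue(
    jobQueueName="demo-queue",
    priority=1,
    computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "demo-env"}],
)
```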
Moreover, there is one other feature, CloudFormation, where you can have templates of what you want to do and just modify them to customize it to your needs.
These templates make it a lot easier to get started. If you've been doing this for a while, you probably already have a template in your toolbox, and you can just take one of those and customize it. So, these templates help a lot.
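As an example of the template-driven approach, here is a hedged sketch of deploying a Batch stack from an existing CloudFormation template with boto3. The template file, stack name, and parameter name are assumptions for illustration, not a template from my toolbox.

```python
# Sketch: reuse an existing CloudFormation template, tweak a parameter, deploy.
import boto3

cfn = boto3.client("cloudformation")

with open("batch-environment-template.yaml") as f:  # placeholder template file
    template_body = f.read()

cfn.create_stack(
    StackName="my-batch-stack",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "MaxvCpus", "ParameterValue": "2048"},  # assumed template parameter
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
)
cfn.get_waiter("stack_create_complete").wait(StackName="my-batch-stack")
```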