For any cloud provider, the cost has to come down substantially, and that is especially true of Amazon SageMaker, which is extremely expensive for large workloads. With EC2, you have spot instances that cut costs tremendously, but you don't have that in Amazon SageMaker; you pay the standard rate for whatever you use.

I would also like to see better GPU integration. GPUs are very expensive on AWS, as they are with any cloud provider. NVIDIA has introduced options with Databricks for GPUs, so it would be interesting to see how Amazon SageMaker can parallelize GPU usage. I haven't used it to scale across multiple GPUs automatically for model training. The key points are the cost and how effectively they integrate GPUs into the workload for training machine learning models; we want to see how seamless that is and how well it works.
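For context, this is a minimal sketch of what configuring a multi-node, multi-GPU training job looks like with the SageMaker Python SDK, not a statement about how well it scales in practice. The script name, role ARN, and S3 path are placeholders, and the distribution setting assumes SageMaker's data-parallel library is available for the chosen instance type.

    # Minimal sketch (SageMaker Python SDK); train.py, the role ARN, and the
    # S3 path are placeholders, not values taken from this review.
    from sagemaker.pytorch import PyTorch

    estimator = PyTorch(
        entry_point="train.py",            # user-supplied training script
        role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
        instance_count=2,                  # two nodes
        instance_type="ml.p4d.24xlarge",   # 8 GPUs per node
        framework_version="2.1",
        py_version="py310",
        # Enable SageMaker's distributed data-parallel library so the job
        # spans all GPUs across both instances.
        distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    )

    estimator.fit({"training": "s3://example-bucket/training-data/"})

Even with a configuration like this, the cost question remains, since multi-GPU instances are billed for the full duration of the training job.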