The best thing about S3 is the security. I can enable encryption at rest using KMS, AWS's Key Management Service. A key management service encrypts your data with keys it controls; without the key, you can't decrypt the data. Amazon lets you bring your own keys or use keys it manages for you, and you decide which when you create the bucket.

The bucket also has all the other features, like versioning and lifecycle rules. The biggest plus for me as a storage admin is the different storage tiers: Standard, Infrequent Access, Glacier, and Glacier Deep Archive. The differences between the tiers come down to cost and retrieval speed. Standard-tier data is considered "hot," meaning it's accessible anytime, but it's expensive. If you want to archive data you won't need often, or you have ample time to retrieve a backup, the most cost-effective option is Glacier Deep Archive. It's very cheap, but the disadvantage is that when you request data, it takes hours to become accessible: roughly within 12 hours for a standard retrieval, and up to about 48 hours for a bulk retrieval.

There are no limitations on how much you can store, but the biggest limitation for us is that you can't use Glacier Deep Archive from the console. You can't go into the AWS console and copy something directly into Glacier Deep Archive, but you can do it from the command line. If you have programmatic access, there's a command to put data directly into Glacier Deep Archive.

We don't use any AI on AWS right now; it isn't really that useful for us yet. We're just playing around with Copilot from Microsoft, which is more on the Azure side. We haven't leveraged any AWS AI offerings; we've basically just experimented with Amazon Bedrock, but haven't done anything in production with it.
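To make the KMS point concrete, here's a sketch of how default encryption with a customer-managed key might be set on a bucket using the AWS CLI. The bucket name and key ARN are placeholders, not values from our setup:

```shell
# Sketch: enable default SSE-KMS encryption on a bucket.
# "my-backup-bucket" and the key ARN are hypothetical placeholders.
aws s3api put-bucket-encryption \
  --bucket my-backup-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      }
    }]
  }'
```

With this in place, objects written to the bucket are encrypted with that KMS key by default; using `"SSEAlgorithm": "AES256"` instead would fall back to Amazon's fully managed S3 keys.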
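The versioning and lifecycle features I mentioned can also be wired up from the CLI. This is a sketch under assumed names (the bucket, prefix, and day thresholds are illustrative), showing a rule that steps objects down through the tiers over time:

```shell
# Sketch: turn on versioning, then transition objects through the tiers.
# Bucket name, prefix, and day counts are hypothetical examples.
aws s3api put-bucket-versioning \
  --bucket my-backup-bucket \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/"},
      "Transitions": [
        {"Days": 30,  "StorageClass": "STANDARD_IA"},
        {"Days": 90,  "StorageClass": "GLACIER"},
        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
      ]
    }]
  }'
```

The appeal of doing it this way is that the tiering happens automatically: data starts hot in Standard and gets cheaper to store as it ages, without anyone moving it by hand.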
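And this is the command-line route into Glacier Deep Archive I was referring to: the `aws s3 cp` command accepts a `--storage-class` flag, so with programmatic access you can write an object straight into Deep Archive, and later file a restore request to get it back. File and bucket names here are placeholders:

```shell
# Sketch: put an object directly into Glacier Deep Archive,
# then request a bulk restore. Names are hypothetical placeholders.
aws s3 cp ./backup.tar s3://my-backup-bucket/backups/backup.tar \
  --storage-class DEEP_ARCHIVE

# The object is not readable until a restore completes (hours later).
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key backups/backup.tar \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'
```

The restore makes a temporary readable copy for the requested number of days; the archived original stays in Deep Archive the whole time.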