What is our primary use case?
One of our use cases is storing Veeam backups. S3 itself is more complex to work with than Veeam, which is a simpler product.
S3 is essentially object storage: a place to store data that you can set up however you want. We usually connect to S3 using the API. It requires an access key and a secret key, which we input into the application we're connecting from. Then that application can see the S3 bucket as if it were local storage, and we can copy data into it.
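As a rough illustration, here is a minimal sketch of that connection using Python and boto3; the bucket name, file name, and credentials are hypothetical placeholders:

```python
# Minimal sketch: connect to S3 with an access key and secret key,
# then copy a local file into a bucket (names are placeholders).
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",    # access key from IAM
    aws_secret_access_key="...",    # matching secret key
)

# Once connected, the application treats the bucket like local storage.
s3.upload_file("backup.vbk", "example-backup-bucket", "veeam/backup.vbk")
```

Where the application runs inside AWS, an IAM role attached to the instance is generally preferable to hard-coded keys.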
What is most valuable?
The best thing about S3 is the security. I can enable encryption at rest using KMS, a key management service. KMS keys encrypt the data, and you need the key to decrypt it; without the key, you can't access the data. Amazon lets you use your own keys or theirs. They offer Amazon-managed encryption, which you decide on when you create the bucket. The bucket also has all the other features, like versioning.
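For example, here is a minimal sketch of turning on default SSE-KMS encryption for a bucket with boto3. The bucket name and key ARN are hypothetical; swapping "aws:kms" for "AES256" would select Amazon-managed (SSE-S3) encryption instead:

```python
# Minimal sketch: set a bucket's default encryption at rest to SSE-KMS.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-backup-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    # "aws:kms" uses a KMS key; "AES256" would use
                    # Amazon-managed SSE-S3 encryption instead.
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/...",
                }
            }
        ]
    },
)
```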
We can do versioning and lifecycle policies. The biggest plus for me as a storage admin is the different tiers: Standard, Infrequent Access, Glacier, and Glacier Deep Archive. The differences between these tiers are cost and data retrieval speed. Standard tier data is considered "hot," meaning it's accessible anytime, but it's expensive.
If you want to archive data you won't need often, or if you have ample time to retrieve a backup, the most cost-effective option is Glacier Deep Archive. It's very cheap, but the disadvantage is that when you request data, it takes up to 24 hours to become accessible.
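To make the tiering concrete, here is a minimal sketch that enables versioning and adds a lifecycle rule moving objects down through the tiers over time; the day counts, prefix, and bucket name are assumptions for illustration:

```python
# Minimal sketch: enable versioning, then age objects through the tiers.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="example-backup-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "veeam/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},   # infrequent access
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # cheapest, slowest
                ],
            }
        ]
    },
)
```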
There are no limitations on how much you can store, but the biggest limitation is that you cannot use Glacier Deep Archive from the console. You can't go into the AWS console and copy something directly into Glacier Deep Archive, but you can do it from the command line. If you have programmatic access, there's a command to put data directly into Glacier Deep Archive.
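With the AWS CLI that's the --storage-class DEEP_ARCHIVE flag on aws s3 cp; the boto3 equivalent is a put_object call with a StorageClass, sketched below with hypothetical names. Retrieval later goes through a restore request, which is where the waiting time comes in:

```python
# Minimal sketch: write an object straight into Glacier Deep Archive,
# then request a restore when the data is needed again.
import boto3

s3 = boto3.client("s3")

with open("backup.vbk", "rb") as f:
    s3.put_object(
        Bucket="example-backup-bucket",
        Key="archive/backup.vbk",
        Body=f,
        StorageClass="DEEP_ARCHIVE",  # lands directly in Deep Archive
    )

# Reading it back later requires a restore job, which can take hours.
s3.restore_object(
    Bucket="example-backup-bucket",
    Key="archive/backup.vbk",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```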
We don't use anything AI-related on AWS right now; the AI offerings aren't really useful for us yet. We're just playing around with Copilot from Microsoft, which is more on the Azure side. On AWS, we've basically just experimented with Amazon Bedrock, but haven't done anything in production with it.
What needs improvement?
There's a lot of complexity, but that's unavoidable because it needs to be versatile. You have to be able to get the data in many different ways.
For example, if you need to give just one file to someone outside the office, you can create a temporary pre-signed URL for them to access it. That takes a little bit of fiddling with the settings, but it's not really a disadvantage. It's just part of the system. There's no real way to make that easier. The more secure something is, the more complex it will be.
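For reference, here is a minimal sketch of generating such a URL with boto3; the bucket, key, and expiry are hypothetical:

```python
# Minimal sketch: create a temporary pre-signed URL for a single object.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-backup-bucket", "Key": "docs/report.pdf"},
    ExpiresIn=3600,  # link is valid for one hour
)
print(url)  # send this link; it stops working after expiry
```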
For how long have I used the solution?
We're not dealing with it extensively right now. We're evaluating a migration to AWS but haven't moved much yet. We've mainly been doing proofs of concept and testing. S3 is what we've been using the most, though.
How are customer service and support?
Customer service and support are a little harder to reach compared to Veeam's. Veeam's support is very good, definitely a ten out of ten.
Amazon support so far has been maybe a seven out of ten.
How was the initial setup?
The initial setup is complex in the sense that you need to know what will access the data and what won't. That's the biggest issue. For example, say you have an EC2 instance that needs to access an S3 bucket while everything else is locked out. There are several ways to do that, such as creating a bucket policy that allows only the EC2 instance to access it. That requires creating a role.
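As a rough sketch of that lock-down, here is a bucket policy applied with boto3 that denies everything except the instance's role. The account ID, role name, and bucket are hypothetical placeholders, and a real policy would need care not to lock out administrators:

```python
# Minimal sketch: restrict a bucket to a single IAM role (the one
# attached to the EC2 instance) by denying every other principal.
import json
import boto3

s3 = boto3.client("s3")
role_arn = "arn:aws:iam::111122223333:role/example-ec2-role"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyInstanceRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-backup-bucket",
                "arn:aws:s3:::example-backup-bucket/*",
            ],
            # Deny applies to everyone whose principal ARN doesn't match.
            "Condition": {"StringNotLike": {"aws:PrincipalArn": role_arn}},
        }
    ],
}

s3.put_bucket_policy(Bucket="example-backup-bucket", Policy=json.dumps(policy))
```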
On the other hand, if you just want a public S3 bucket where everyone has access to unprotected data, like public documentation on a product, that's very easy to set up.
So, it can be as easy or as complex as you need it to be.
What about the implementation team?
I took care of the solution myself, together with a colleague who is more on the vSphere and storage admin side, and one of the networking guys who helped with the networking portion.
Once it's set up, there's really no maintenance.
What was our ROI?
We don't expect to see ROI at the moment, but we're hoping to see some returns once we start moving workloads to the cloud. Then we can start decommissioning a lot of the on-premises hardware. We'll need less storage and compute as more gets deployed to the cloud.
What's my experience with pricing, setup cost, and licensing?
The pricing model is complex. If you need your backup data available at all times but don't access it frequently, there are better options. You could go with S3, but other vendors offer the same thing at much better prices. For example, Wasabi has S3-compatible hot storage and charges a lot less.
The disadvantage of Wasabi is that they don't want you accessing the data frequently. They say you can have it right away if you need it, but if you keep accessing it all the time, you're going beyond their service agreement. At that point, you'd need to go with an Amazon solution.
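Because Wasabi speaks the S3 API, the same tooling works by pointing it at a different endpoint. A minimal sketch, assuming Wasabi's s3.wasabisys.com endpoint and vendor-issued keys (both assumptions for illustration):

```python
# Minimal sketch: reuse the S3 client against an S3-compatible vendor
# by overriding the endpoint URL. Credentials come from the vendor.
import boto3

wasabi = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",  # assumed Wasabi endpoint
    aws_access_key_id="WASABI_ACCESS_KEY",
    aws_secret_access_key="WASABI_SECRET_KEY",
)

wasabi.upload_file("backup.vbk", "example-wasabi-bucket", "veeam/backup.vbk")
```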
If you're going to store data in an archive that you don't need to access frequently and can wait up to 24 hours to retrieve, then Amazon is a good solution.
Amazon would definitely be a seven out of ten, where one is the lowest price and ten is the most expensive.
What other advice do I have?
I would recommend it as part of a good backup strategy. The 3-2-1 approach suggests keeping three copies of your data on two different types of media, with one copy off-site. That off-site copy could definitely be an S3 bucket, so I would recommend it in that sense.
Overall, I would rate it a seven out of ten because it's great for general storage. If you need to access a lot of data all the time and it needs to be fast, then it's not the best solution. There are better solutions out there.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.