PeerSpot user
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Consultant
Top 20
I like the ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS.

Cloud Conversations: AWS S3 Cross Region Replication storage enhancements

Among other recent enhancements, Amazon Web Services (AWS) announced new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS), increasing the maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. The EBS enhancements include the ability to store up to 16 TB of data in a single volume and perform 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O and application enhancements here.

The Problem, Issue, Challenge, Opportunity and Need

The challenge is being able to move data (e.g. objects) stored in AWS buckets from one region to another in a safe, secure, timely, automated, cost-effective way.

Even though AWS has a global namespace, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (an AWS S3, EBS, EC2, Glacier, Regions and Availability Zones primer can be found here).

Understanding the challenge and designing a strategy

The following diagram shows the challenge and how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, doing so essentially means reading your objects, pulling that data out via the internet, and then writing it to another location. The catch is that this can add extra costs, take time, consume network bandwidth and require extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, S3 tools (not AWS) and a long list of others).

What is AWS S3 cross-region replication?

Highlights of AWS S3 Cross-region replication include:

  • AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
  • S3 replication of new objects added to an existing or new bucket (note that only new objects get replicated)
  • Policy-based replication tied into S3 versioning and life-cycle rules
  • Quick and easy to set up in a matter of minutes via the S3 dashboard or other interfaces
  • Keeps region-to-region data replication and movement within AWS networks (potential cost advantage)

To activate it, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or a prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated.
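While I set this up via the S3 dashboard, the same steps can be scripted. The following is a minimal sketch using Python and boto3 under a few assumptions: the bucket names, regions, account ID and IAM role ARN are placeholders, and the replication rule syntax should be verified against the current S3 API documentation before use.

```python
import boto3

SOURCE_BUCKET = "my-source-bucket"        # placeholder, in US Standard (us-east-1)
DEST_BUCKET = "my-destination-bucket"     # placeholder, in EU Ireland (eu-west-1)
REPLICATION_ROLE = "arn:aws:iam::123456789012:role/s3-crr-role"  # placeholder IAM role

src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="eu-west-1")

# Cross-region replication requires versioning enabled on both buckets
src.put_bucket_versioning(Bucket=SOURCE_BUCKET,
                          VersioningConfiguration={"Status": "Enabled"})
dst.put_bucket_versioning(Bucket=DEST_BUCKET,
                          VersioningConfiguration={"Status": "Enabled"})

# Replicate all new objects (empty prefix) from the source to the destination bucket
src.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE,
        "Rules": [
            {
                "ID": "replicate-new-objects",
                "Prefix": "",             # all new objects; use a prefix to narrow scope
                "Status": "Enabled",
                "Destination": {"Bucket": f"arn:aws:s3:::{DEST_BUCKET}"},
            }
        ],
    },
)
```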

Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):

  • As with other forms of mirroring and replication, if you add something on one side, it gets replicated to the other side
  • As with other forms of mirroring and replication, if you delete something on one side, it can be deleted on both sides (be careful and do some testing)
  • Keep costs in perspective, as you still need to pay for your S3 storage at both locations as well as applicable internal data transfer and GET fees
  • Click here to see current AWS S3 fees for various regions

S3 Cross-region replication and alternative approaches

There are several AWS regions around the world, and up until today AWS customers could copy, sync or replicate S3 bucket contents between regions manually (or via automation) using various tools such as Cloudberry, Cyberduck, S3browser and S3motion, to name just a few, as well as via various gateways and other technologies. Some of those tools and technologies are open source or free, some are freemium and some are premium; they also vary by interface (some with a GUI, others with a CLI or APIs), including the ability to mount an S3 bucket as a local network drive and use familiar tools to sync or copy.

However, a catch with the above-mentioned tools (among others) and approaches is that replicating your data (e.g. objects in a bucket) can involve other AWS S3 fees. For example, reading data from one AWS region (e.g. a GET, which has a fee) and then copying it out over the internet incurs data transfer charges. Likewise, when copying data into another AWS S3 region (inbound data transfer is free), there is still the cost of storage at the destination.
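To make that fee distinction concrete, here is what the "server in the middle" approach looks like in practice; a minimal sketch with Python and boto3, using placeholder bucket and key names, that reads an object from one region and writes it to another:

```python
import boto3

SRC_BUCKET = "my-source-bucket"       # placeholder names for illustration
DST_BUCKET = "my-destination-bucket"
KEY = "example/object.bin"

src = boto3.client("s3", region_name="us-east-1")   # source region
dst = boto3.client("s3", region_name="eu-west-1")   # destination region

# GET from the source region: incurs a GET request fee plus data transfer
# out to the internet if this script runs outside AWS.
body = src.get_object(Bucket=SRC_BUCKET, Key=KEY)["Body"].read()

# PUT into the destination region: inbound data transfer is free, but
# storage at the destination is billed as usual.
dst.put_object(Bucket=DST_BUCKET, Key=KEY, Body=body)
```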

AWS S3 cross-region replication hands-on experience (first look)

For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU (Ireland). Setup and configuration was very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.

I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket, until I realized that only new objects would be replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU (Ireland) bucket. When I deleted those replicated objects from my EU (Ireland) bucket and switched back to my view of the source bucket in the US, those new objects had already been deleted from the source. Yes, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes, etc.).
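If you would rather verify replication programmatically than by eyeballing the dashboards, one option (a sketch with boto3 and placeholder names, assuming a replication rule like the one above is in place) is to check an object's replication status on the source side:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Placeholder bucket/key; for an object covered by a replication rule the
# response includes a ReplicationStatus of PENDING, COMPLETED or FAILED
# (replicated copies on the destination side report REPLICA).
resp = s3.head_object(Bucket="my-source-bucket", Key="example/object.bin")
print(resp.get("ReplicationStatus"))
```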

While I was not able to do a solid, quantifiable performance test, based simply on some quick copies and my network speed, moving data via S3 cross-region replication was faster than using something like S3motion with my server in the middle.

It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled as part of AWS) is that some of the fees for pulling data out of AWS and transferring it out via the internet can be avoided.

Where to learn more

Here are some links to learn more about AWS S3 and related topics

What this all means and wrap-up

For those who are looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region to a bucket in a different region, you now have a new option. There are potential cost savings, if that is your goal, along with performance benefits, in addition to using whatever might already be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR) and disaster recovery (DR) involving S3 across regions, as well as a means for content caching or distribution, among other possible uses.

Overall, I like this ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS as well as among other public cloud services and local resources.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Consultant
Top 20
Amazon Web Services (AWS) and the Netflix Fix?

I received the following note from Amazon Web Services (AWS) about an enhancement to their Elastic Compute Cloud (EC2) service that can be seen by some as a service enhancement, or perhaps by others, after last week's outages, as a fix or as addressing a gap in their services. Note for those not aware: you can view the current AWS service status portal here.

The following is the note I received from AWS.

Announcing Multiple IP Addresses for Amazon EC2 Instances in Amazon VPC
Dear Amazon EC2 Customer,

We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.

The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.

You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged $0.005/hour for each additional EIP associated with that instance per hour on a pro rata basis.

With this release we are also lowering the charge for EIP addresses not associated with running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis. This price reduction is applicable to EIP addresses in both Amazon EC2 and Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.
To learn more about multiple IP addresses, visit the Amazon VPC User Guide. For more information about pricing for additional Elastic IP addresses on an instance, please see Amazon EC2 Pricing.
Sincerely,

The Amazon EC2 Team

We hope you enjoyed receiving this message. If you wish to remove yourself from receiving future product announcements and the monthly AWS Newsletter, please update your communication preferences.

Amazon Web Services LLC is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message produced and distributed by Amazon Web Services, LLC, 410 Terry Ave. North, Seattle, WA 98109-5210.

End of AWS message
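To put the announcement in more concrete terms, here is a minimal sketch of how one might script the feature with Python and boto3; the network interface ID is a placeholder and the calls should be checked against the current EC2 API, so treat this as an illustration rather than AWS's prescribed workflow.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ENI_ID = "eni-0123456789abcdef0"   # placeholder elastic network interface ID

# Add a secondary private IP address to the instance's network interface
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=ENI_ID,
    SecondaryPrivateIpAddressCount=1,
)

# Find the non-primary (secondary) private IPs now on the interface
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])
secondary_ips = [
    p["PrivateIpAddress"]
    for p in eni["NetworkInterfaces"][0]["PrivateIpAddresses"]
    if not p.get("Primary")
]

# Allocate a VPC Elastic IP and associate it with one of the secondary
# private IPs, e.g. to host a second SSL website on the same instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=ENI_ID,
    PrivateIpAddress=secondary_ips[0],
)
```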

Either way you look at it, AWS (disclosure: I'm a paying EC2 and S3 customer) is taking responsibility on their part to do what is needed to enable a resilient, flexible, scalable data infrastructure. What I mean by that is that protecting data and access to it in cloud environments is a shared responsibility, including discussing what went wrong, how to fix and prevent it, as well as communicating best practices. That is, both the provider of the service and those who use its capabilities have to take some ownership of and responsibility for how it gets used.

For example, last week major thunderstorms rolled across the U.S., causing large-scale power outages along the eastern seaboard and in particular in the Virginia area, where Amazon's US East-1 region has data centers located. Keep in mind that Amazon regions are made up of multiple availability zones, each with different physical data centers, to cut or decrease the chance of a single point of failure. However, on June 30, 2012, during the major storms on the East Coast of the U.S., something did go wrong, and as is usually the case, a chain of events resulted in or near a disaster (you can read the AWS post-mortem here).

The result is that AWS services based in the US East-1 (Virginia) region were knocked offline for a period, which impacted EC2, Elastic Block Storage (EBS), Relational Database Service (RDS) and Elastic Load Balancer (ELB) capabilities there. This is not the first time the Virginia region has been affected, having seen a disruption about a year ago. What was different about this most recent outage is that a year ago one of the marquee AWS customers, Netflix, was not affected, due to how they use multiple availability zones for high availability (HA). In last week's AWS outage, Netflix customers and services were affected, however not due to loss of data or systems, but rather loss of access (which to a user or consumer is the same thing). The loss of access was due to the Elastic Load Balancing service failing to direct users to other availability zones.

Consequently, if you choose to read between the lines of the above email note I received from AWS, you can either look at the new service capabilities as an enhancement, or as AWS learning from and improving their capabilities. Also reading between the lines, you can see how some environments such as Netflix take responsibility for how they use cloud services, designing for availability, resiliency and scale with stability, as opposed to simply using the cloud as a cost-cutting tool.

Thus, when both the provider and the consumer take some responsibility for ensuring data protection and accessibility to services, there is less of a chance of service disruptions. Likewise, when both parties learn from incidents or mistakes and leverage those experiences, it makes for a more robust solution on a go-forward basis. For those who have been around the block (or file) a few times and think that clouds are not reliable or are still immature, you may have a point; however, think back to when your favorite or preferred platform (e.g. mainframe, mini, PC, client-server, iProduct, Web or other) initially appeared, and its teething problems or associated headaches.

IMHO, AWS along with other vendors or service providers who take responsibility to publish post-mortems of incidents, find and fix issues, and address and enhance capabilities are part of the solution, laying the groundwork for the future vs. simply playing to a near-term trend theme. Likewise, vendors and service providers who reach out to educate their customers and get them to take some responsibility in how they use services, removing complexity (and cost) to enhance services as opposed to simply cutting cost and introducing risk, will do better over the long run.

As I discuss in my book Cloud and Virtual Data Storage Networking (CRC Press), do not be scared of clouds; however, be ready, do your homework, and learn and understand what needs to be done or done differently. This means taking on a shared responsibility, one that the service provider should also be taking with you, not to mention identifying new best practices and tools to be used, along with conducting proofs of concept (POCs) to learn what to do and what not to do.

[To view all of the links mentioned in this post, go to: http://storageioblog.com/amazon-web-services-aws-and-the-netflix-fix/ ]

Some updates:

http://storageioblog.com/november-2013-server-storageio-update-newsletter/

http://storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Founder & Managing Director Digital Solutions at a tech services company with 51-200 employees
Real User
Good support, impressive technology, and beneficial service portfolio
Pros and Cons
  • "Amazon AWS has a better portfolio. They have an impressive technology and service portfolio."
  • "The invoicing procedure of Amazon AWS needs to be improved. It can be difficult to manage."

What is most valuable?

Amazon AWS has a better portfolio. They have an impressive technology and service portfolio.

What needs improvement?

The invoicing procedure of Amazon AWS needs to be improved. It can be difficult to manage.

For how long have I used the solution?

I have used Amazon AWS within the last 12 months.

How are customer service and support?

The technical support of Amazon AWS is good.

Which solution did I use previously and why did I switch?

I have used Oracle previously, and I don't see any difference between Amazon AWS and Oracle from the stability and availability point of view.

What's my experience with pricing, setup cost, and licensing?

Amazon AWS is a bit more expensive than Oracle.

What other advice do I have?

I rate Amazon AWS an eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Regional Business Manager - North Latam - Public Sector - Amazon Web Services (AWS) at a tech services company with 10,001+ employees
Real User
Offers a very valuable machine learning service
Pros and Cons
  • "Machine learning is a valuable feature."
  • "The solution could be more user-friendly."

What is our primary use case?

We advise our clients on using AWS services. It has many applications; in healthcare, for example, with regard to patient medical history. Some use it for hosting, SAP and VMware. Those are the most common uses for our clients. We are resellers and I'm the operations director.

What is most valuable?

I think machine learning is one of the most used and most valuable services, especially in scientific research. The solution is evolving all the time. 

What needs improvement?

Some of the services are hard to use so I think a more user-friendly interface would be helpful.

For how long have I used the solution?


What do I think about the stability of the solution?

The solution is stable. 

What do I think about the scalability of the solution?

The solution is very scalable. 

How are customer service and support?

Amazon offers different support plans. We have enterprise support and they generally get back to us within half an hour. The escalation process is very fast, because they know that there is a critical platform involved. They generally offer a high level of support. 

How was the initial setup?

The initial setup is not too complex but it's not straightforward either, somewhere in the middle. In terms of deployment time, it can be anywhere between a few minutes and a week, depending on what you need. 

What other advice do I have?

Training is critical before implementing the solution. There are very good AWS certifications like the certified practitioner, and there's a lot of free training on the AWS webpage that customers can use. Most of the training is hands-on so you can experience how things would be done in a work environment. AWS recently deployed 100 free courses on amazon.com to help people better understand their products. I would recommend looking at those.

I rate this solution nine out of 10. 

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer:
PeerSpot user
Architecture and Solutions Specialist at a marketing services firm with 10,001+ employees
Real User
Amazon AWS is fantastic.
Pros and Cons
  • "Amazon for DevOps is fantastic. Amazon has fast clouds, and the process and the Dev is very good."
  • "Amazon tools are for more mature DevOps. The process and the Dev is very good, but it doesn't compare to the ease of using the Google Cloud Platform."

What is most valuable?

Amazon for DevOps is fantastic. Amazon has fast clouds, and the process and the Dev are very good.

What needs improvement?

Amazon tools are for more mature DevOps. The process and the Dev are very good, but it doesn't compare to the ease of using the Google Cloud Platform. Google Cloud Platform is easier for developers since it has many automation features. You can use the many tools to automate your infrastructure or create machines. Personally, I like using both.

For how long have I used the solution?

I have been using AWS for two years.

What's my experience with pricing, setup cost, and licensing?

AWS pricing is higher than other services.

What other advice do I have?

I would rate AWS a ten out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Systems Architect at an educational organization with 1,001-5,000 employees
Real User
Flexible and good for building machine learning workloads
Pros and Cons
  • "We've built several AI ML solutions and done lots of work on the GPUs available on Amazon servers. We did a lot of work around web spidering, natural language processing, and machine learning or deep learning workloads."
  • "I think Amazon could improve some of the security or fine-grained access for metadata and many other things."

What is our primary use case?

We're using AWS for limited purposes right now. The university has its own storage, servers, and large amounts of data center equipment, and the cloud fills a niche. We put things in the cloud so that others have access. But from a storage standpoint, 95 percent of the usage is entirely on-premises. We might use it more in the future, but we're trying to build up a storage ecosystem right now. We'll likely build that around some open-source solution, like Ceph or MinIO, or something from a popular vendor.

RedHat has Ceph storage too, and IBM has object storage. I'm not sure what the university will go with, but those are the ones we are looking at. We're using AWS S3 for general storage and storing images. We also use AWS as a platform for building some web services and things like that. 

What is most valuable?

We've built several AI ML solutions and done lots of work on the GPUs available on Amazon servers. We did a lot of work around web spidering, natural language processing, and machine learning or deep learning workloads. 

What needs improvement?

I think Amazon could improve some of the security or fine-grained access for metadata and many other things. From a cloud standpoint, Amazon could provide more ways to restrict access or provide fine-grained access to different services. For the time being, I think the ecosystem is relatively secure, but there is room for improvement.

What do I think about the scalability of the solution?

AWS is scalable. It's serving about 150 users at my company right now. All of the users are researchers who do their own thing. Each research team manages its own partition and has fine-grained access to all the services. Small groups of around 10 to 15 people manage their own respective groups as to all the requirements associated with AWS.

How was the initial setup?

We customized our Amazon AWS deployment. The process takes about three to five hours, depending on the ecosystems we are building. It depends on whether it is related to web services or the call configuration. Some configurations take no more than half an hour. If you're doing something involving the server, you need to personally install some servers and some of the other database-related stuff.

I'm one of the AWS architects, but we have administrators who take care of the maintenance. I'm looking at some of the SNIA content, and it seems pretty good for object storage or some of the other storage-related options. I'm still trying to see which solutions are potentially more suitable for us. 

What's my experience with pricing, setup cost, and licensing?

I'm not sure about the licensing. I don't know what kind of subscription the university bought. I imagine it's similar to Cognizant, which had a usage-based mechanism. We bought yearly subscriptions for specific servers while pre-booking some of the server-based storage or computing infrastructure.

Which other solutions did I evaluate?

We've used Azure also. They are all fairly good. 

What other advice do I have?

I rate AWS eight out of 10. I used to work in Cognizant and TCS before that, and we used different cloud services, such as Amazon and Azure. If you want some kind of public cloud infrastructure, I would go with one of these or maybe Google Cloud. The university is in the process of setting up its own storage or server ecosystem. We plan to store massive amounts of video, images, and other objects, like our AI/ML workloads. 

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Cloud Solution Manager
Real User
Useful storage, scalable, and stable
Pros and Cons
  • "This solution is used as the basic requirement for any virtual machines use cases, the storage is used for each use case."

What is our primary use case?

We use the solution for many use cases, such as application hosting, virtual desktops, and disaster recovery.

How has it helped my organization?

Customers don't need to calculate what their future requirements will be. They can start using Amazon AWS no matter what the requirements are, and if they grow in size it can support them. By using the solution they can reduce their infrastructure, which helps them reduce the cost of their infrastructure.

What is most valuable?

This solution is used as the basic requirement for any virtual machine use case; the storage is used for each use case.

For how long have I used the solution?

I have been using Amazon AWS for a year and a half.

What do I think about the stability of the solution?

The solution is stable. We have global customers on this platform. If you look at the past record of Amazon AWS, there has rarely been any downtime or incidents.

What do I think about the scalability of the solution?

Amazon AWS is scalable; there are different tools that help maintain the scalability of the solution. Many of the tools are free of cost. There is no restriction on how many people can use this solution.

We have 500 customers in the Mumbai region and we plan to increase usage.

How are customer service and support?

There is vendor technical support, but it is not required if customers are taking our support. Customers can also directly purchase AWS support. Our small customers are purchasing AWS support and they are pleased with it.

How was the initial setup?

The initial setup difficulty depends on the customer's environment. If it's a simple single virtual machine, then it's a simple setup. If it's an SAP-type workload on AWS, then it is somewhat complicated.

If it is a small single-server implementation, it will take one day to deploy. Otherwise, from an application perspective, your team needs to handle everything and it could take anywhere from 15 days to one month.

What about the implementation team?

We have our own team that does the deployment.

What's my experience with pricing, setup cost, and licensing?

The price of the Virtual Desktop service from Amazon AWS could improve; it is more expensive than competitors. The pricing model we are using is pay-as-you-go. You only pay for what you use.

The technical support from Amazon costs extra, and there are more than 200 services you can use that have a cost.

What other advice do I have?

My advice for those thinking about implementing Amazon AWS is to start using it; do not be afraid to use these services. If you know how it works and you receive the right support, it always helps to reduce the cost and headache of IT.

I rate Amazon AWS an eight out of ten.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user