PeerSpot user
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Consultant
Top 20
I like the ability to move S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS.

Cloud Conversations: AWS S3 Cross Region Replication storage enhancements

Amazon Web Services (AWS) recently announced, among other enhancements, new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS), increasing the maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. The EBS enhancements included the ability to store up to 16 TBytes of data in a single volume and perform 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O and application enhancements here.

The Problem, Issue, Challenge, Opportunity and Need

The challenge is being able to move data (e.g. objects) stored in AWS buckets from one region to another in a safe, secure, timely, automated, cost-effective way.

Even though AWS has a global namespace, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (an AWS S3, EBS, EC2, Glacier, Regions and Availability Zones primer can be found here).

Understanding the challenge and designing a strategy

The following diagram shows the challenge and how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, that essentially involves reading your objects, pulling that data out via the internet, and then writing it to another place. The catch is that this can add extra costs, take time, consume network bandwidth and require extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, S3 tools (not AWS) and a long list of others).
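To make the do-it-yourself approach concrete, here is a minimal sketch using the Python boto3 SDK (the bucket names and regions are hypothetical placeholders). Each object is read out of the source region through the machine running the script and written back into the destination region, which is exactly where the extra fees, time and bandwidth come from:

    import boto3

    # Hypothetical bucket names and regions; substitute your own.
    SRC_BUCKET, SRC_REGION = "my-source-bucket", "us-east-1"
    DST_BUCKET, DST_REGION = "my-destination-bucket", "eu-west-1"

    src = boto3.client("s3", region_name=SRC_REGION)
    dst = boto3.client("s3", region_name=DST_REGION)

    # Walk every object in the source bucket and copy it one at a time.
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SRC_BUCKET):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            # The GET pulls the data out of the source region (request and
            # data transfer fees apply), then the PUT writes it into the
            # destination region.
            body = src.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()
            dst.put_object(Bucket=DST_BUCKET, Key=key, Body=body)
            print("copied", key)

Tools such as S3motion essentially automate this same read-then-write pattern, with the data making a round trip through whatever system runs the copy.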

What is AWS S3 Cross-region replication

Highlights of AWS S3 Cross-region replication include:

  • AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
  • S3 replication of new objects added to an existing or new bucket (note that only new objects get replicated)
  • Policy-based replication tied into S3 versioning and lifecycle rules
  • Quick and easy to set up for use in a matter of minutes via the S3 dashboard or other interfaces
  • Keeps region-to-region data replication and movement within AWS networks (potential cost advantage)

To activate, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated.
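For those who prefer to script the setup rather than use the dashboard, the same steps can be sketched with boto3 (the bucket names, region and IAM role ARN below are hypothetical placeholders; the role must already exist and allow S3 to replicate on your behalf):

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Versioning must be enabled on the source (and destination) bucket
    # before cross-region replication can be configured.
    s3.put_bucket_versioning(
        Bucket="my-source-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Replicate all new objects (empty prefix) to the destination bucket,
    # which lives in another region.
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/my-s3-replication-role",
            "Rules": [
                {
                    "Status": "Enabled",
                    "Prefix": "",
                    "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
                },
            ],
        },
    )

Once the rule is in place, new objects written to the source bucket should start appearing in the destination bucket within minutes.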

Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):

  • As with other forms of mirroring and replication, if you add something on one side it gets replicated to the other side
  • As with other forms of mirroring and replication, if you delete something on one side it can be deleted on both (be careful and do some testing)
  • Keep costs in perspective, as you still need to pay for your S3 storage at both locations as well as applicable internal data transfer and GET fees
  • Click here to see current AWS S3 fees for various regions

S3 Cross-region replication and alternative approaches

There are several AWS regions around the world, and up until today AWS customers could copy, sync or replicate S3 bucket contents between regions manually (or via automation) using various tools such as Cloudberry, Cyberduck, S3browser and S3motion, to name just a few, as well as via various gateways and other technologies. Some of those tools and technologies are open-source or free, some are freemium and some are premium, and they also vary by interface (some with a GUI, others with a CLI or APIs), including the ability to mount an S3 bucket as a local network drive and use standard tools to sync or copy.

However, a catch with the above-mentioned tools (among others) and approaches is that replicating your data (e.g. objects in a bucket) can incur other AWS S3 fees. For example, reading data from one AWS region (e.g. a GET, which has a fee) and then copying it out to the internet incurs data transfer fees. Likewise, when copying data into another AWS S3 region (e.g. a PUT, which is free), there is also the cost of storage at the destination.
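As a back-of-the-envelope illustration of that difference, the arithmetic looks roughly like the following (all of the rates are hypothetical placeholders, not actual AWS prices; check the current AWS pricing pages for your regions):

    # Hypothetical rates for illustration only; substitute current AWS pricing.
    GB_TO_MOVE = 100                  # data to move between regions, in GB
    OBJECT_COUNT = 50_000             # number of objects (each GET is billed)

    INTERNET_EGRESS_PER_GB = 0.09     # pulling data out of AWS to the internet
    INTER_REGION_PER_GB = 0.02        # staying on the AWS network between regions
    GET_PER_1000 = 0.004              # GET request fee per 1,000 requests
    DEST_STORAGE_PER_GB_MONTH = 0.03  # storage at the destination (paid either way)

    diy = GB_TO_MOVE * INTERNET_EGRESS_PER_GB + OBJECT_COUNT / 1000 * GET_PER_1000
    crr = GB_TO_MOVE * INTER_REGION_PER_GB + OBJECT_COUNT / 1000 * GET_PER_1000

    print(f"DIY copy via the internet: ${diy:,.2f} in transfer and request fees")
    print(f"Cross-region replication:  ${crr:,.2f} in transfer and request fees")
    print(f"Plus {GB_TO_MOVE} GB x ${DEST_STORAGE_PER_GB_MONTH}/GB-month storage at the destination either way")

The point is not the exact numbers; it is that keeping the transfer on AWS internal networks changes which line items you pay, while destination storage costs apply no matter which approach you use.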

AWS S3 cross-region hands on experience (first look)

For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU (Ireland). Setup and configuration was very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.

I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket until I realized that only new objects would be replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU (Ireland) bucket. When I deleted those replicated objects from my EU (Ireland) bucket and switched back to my view of the source bucket in the US, those new objects were already deleted from the source. Yes, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes etc.).

While I was not able to do a solid quantifiable performance test, simply based on some quick copies and my network speed, moving data via S3 cross-region replication was faster than using something like S3motion with my server in the middle.

It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled as part of AWS) is that some fees to pull data out of AWS and transfer it out via the internet can be avoided.

What this all means and wrap-up

For those who are looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region to a bucket in a different region, you now have a new option. There are potential cost savings, if that is your goal, along with performance benefits, in addition to using whatever might already be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR) and disaster recovery (DR) involving S3 across regions, as well as a means for content caching or distribution among other possible uses.

Overall, I like this ability for moving S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS as well as among other public cloud services and local resources.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Consultant
Top 20
Amazon Web Services (AWS) and the NetFlix Fix?

I received the following note from Amazon Web Services (AWS) about an enhancement to their Elastic Compute Cloud (EC2) service that can be seen by some as a service enhancement, or perhaps by others, after last week's outages, as a fix or a way of addressing a gap in their services. Note for those not aware, you can view the current AWS service status portal here.

The following is the note I received from AWS.

Announcing Multiple IP Addresses for Amazon EC2 Instances in Amazon VPC
Dear Amazon EC2 Customer,

We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.

The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.

You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged $0.005/hour for each additional EIP associated with that instance per hour on a pro rata basis.

With this release we are also lowering the charge for EIP addresses not associated with running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis. This price reduction is applicable to EIP addresses in both Amazon EC2 and Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.
To learn more about multiple IP addresses, visit the Amazon VPC User Guide. For more information about pricing for additional Elastic IP addresses on an instance, please see Amazon EC2 Pricing.
Sincerely,

The Amazon EC2 Team

We hope you enjoyed receiving this message. If you wish to remove yourself from receiving future product announcements and the monthly AWS Newsletter, please update your communication preferences.

Amazon Web Services LLC is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message produced and distributed by Amazon Web Services, LLC, 410 Terry Ave. North, Seattle, WA 98109-5210.

End of AWS message
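To put the announcement in practical terms, here is a hedged sketch of assigning a secondary private IP address to an instance's elastic network interface and associating an Elastic IP with it via the Python boto3 SDK (the network interface ID and EIP allocation ID are hypothetical placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical IDs for illustration; substitute your own VPC resources.
    ENI_ID = "eni-0123456789abcdef0"           # network interface on the instance
    EIP_ALLOCATION_ID = "eipalloc-0abc123def"  # an allocated VPC Elastic IP

    # Add one more private IP address to the instance's network interface.
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=ENI_ID,
        SecondaryPrivateIpAddressCount=1,
    )

    # Find the newly assigned secondary private IP on the interface.
    eni = ec2.describe_network_interfaces(
        NetworkInterfaceIds=[ENI_ID]
    )["NetworkInterfaces"][0]
    secondary = [a["PrivateIpAddress"]
                 for a in eni["PrivateIpAddresses"] if not a["Primary"]][0]

    # Associate the Elastic IP with that secondary private IP, e.g. so a
    # second SSL website on the same instance gets its own public address.
    ec2.associate_address(
        AllocationId=EIP_ALLOCATION_ID,
        NetworkInterfaceId=ENI_ID,
        PrivateIpAddress=secondary,
    )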

Either way you look at it, AWS (disclosure: I'm a paying EC2 and S3 customer) is taking responsibility on their part to do what is needed to enable a resilient, flexible, scalable data infrastructure. What I mean by that is that protecting data and access to it in cloud environments is a shared responsibility, which includes discussing what went wrong, how to fix and prevent it, as well as communicating best practices. That is, both the provider of the service and those who are using its capabilities have to take some ownership of and responsibility for how it gets used.

For example, last week major thunderstorms rolled across the U.S., causing large-scale power outages along the eastern seaboard and in particular in the Virginia area, where one of Amazon's availability zones (US East-1) has data centers located. Keep in mind that Amazon availability zones are made up of a collection of different physical data centers to cut or decrease the chances of a single point of failure. However, on June 30, 2012, during the major storms on the East coast of the U.S., something did go wrong, and as is usually the case, a chain of events resulted in or near a disaster (you can read the AWS post-mortem here).

The result is that AWS services based out of the Virginia availability zone were knocked offline for a period, which impacted EC2, Elastic Block Storage (EBS), Relational Database Service (RDS) and Elastic Load Balancer (ELB) capabilities for that zone. This is not the first time the Virginia availability zone has been affected, having experienced a disruption about a year ago. What was different about this most recent outage is that a year ago one of the marquee AWS customers, NetFlix, was not affected during that outage due to how they use multiple availability zones for HA. In last week's AWS outage, NetFlix customers and services were affected, however not due to loss of data or systems; rather, due to loss of access (which to a user or consumer is the same thing). The loss of access was due to a failure of elastic load balancing, which was not able to direct users to other availability zones.

Consequently, if you choose to read between the lines of the above email note I received from AWS, you can either look at the new service capabilities as an enhancement, or as AWS learning and improving their capabilities. Also reading between the lines, you can see how some environments such as NetFlix take responsibility in how they use cloud services, designing for availability, resiliency and scale with stability, as opposed to simply using the cloud as a cost-cutting tool.

Thus, when both the provider and the consumer take some responsibility for ensuring data protection and accessibility to services, there is less of a chance of service disruptions. Likewise, when both parties learn from incidents or mistakes and leverage those experiences, it makes for a more robust solution on a go-forward basis. For those who have been around the block (or file) a few times and think that clouds are not reliable or are still immature, you may have a point; however, think back to when your favorite or preferred platform (e.g. Mainframe, Mini, PC, client-server, iProduct, Web or other) initially appeared and the teething problems or associated headaches it had.

IMHO, AWS along with other vendors or service providers who take responsibility to publish post-mortems of incidents, find and fix issues, and address and enhance capabilities are part of the solution for laying the groundwork for the future vs. simply playing to a near-term trend theme. Likewise, vendors and service providers who are reaching out and helping to educate their customers to take some responsibility in how they can use services to remove complexity (and cost) and enhance services, as opposed to simply cutting cost and introducing risk, will do better over the long run.

As I discuss in my book Cloud and Virtual Data Storage Networking (CRC Press), do not be scared of clouds; however, be ready, do your homework, and learn and understand what needs to be done or done differently. This means taking on a shared responsibility, one that the service provider should also be taking with you, not to mention identifying new best practices and tools to be used, along with conducting proof of concepts (POCs) to learn what to do and what not to do.

[To view all of the links mentioned in this post, go to: http://storageioblog.com/amazon-web-services-aws-and-the-netflix-fix/ ]

Some updates:

http://storageioblog.com/november-2013-server-storageio-update-newsletter/

http://storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user1065 - PeerSpot reviewer
Senior Manager of Data Center at an integrator with 51-200 employees
Vendor
Amazon AWS is, to date, the best IaaS provider for compute, storage, and availability

Valuable Features:

A few things I admire about the AWS service from Amazon: 1) It provides persistent block storage volumes. 2) Excellent load balancing features for servers. 3) Excellent support for all types of relational databases, for example Oracle. 4) An awesome repository of operating systems, from Ubuntu and Slackware to Microsoft Servers. 5) The virtual private cloud feature is an amazing add-on for the service.

Room for Improvement:

A few cons of the AWS platform: 1) Lack of .NET support. 2) A bit costlier than Microsoft Azure based on compute per hour and outbound bandwidth. 3) Unavailability of middleware caching, integration, and identity management.

Other Advice:

One of the first and most professional services launched for the cloud platform is Amazon AWS. The bundle of services provided, such as CloudFront, CloudWatch, EC2, Simple Storage Service, etc., undoubtedly covers all the aspects needed by developers and system administrators to address infrastructure and scalability demands as they grow.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
SAP Architect at Deloitte
Real User
A stable tool with auto-scaling functionality, but lacking in system configuration documentation
Pros and Cons
  • "We like the that, within the public subnet of this solution, a new instance of the tool is launched when it detects an issue, in order to prevent interruptions in performance."
  • "We would like the system documentation for configuring this solution to be improved, in order to provide better process clarity."

What is most valuable?

We like that, within the public subnet of this solution, a new instance of the tool is launched when it detects an issue, in order to prevent interruptions in performance.
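As a hedged sketch of the kind of self-healing arrangement being described (the names and IDs below are hypothetical, and an Auto Scaling group is only one way to achieve it, not necessarily how this environment is configured), an unhealthy instance in the subnet can be replaced automatically:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Hypothetical launch template name and public subnet ID.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-self-healing-group",
        LaunchTemplate={
            "LaunchTemplateName": "app-launch-template",
            "Version": "$Latest",
        },
        MinSize=1,
        MaxSize=2,
        DesiredCapacity=1,
        VPCZoneIdentifier="subnet-0123456789abcdef0",  # the public subnet
        HealthCheckType="EC2",       # replace the instance when status checks fail
        HealthCheckGracePeriod=300,
    )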

What needs improvement?

We would like the system documentation for configuring this solution to be improved, in order to provide better process clarity.

Similarly, we would like more templates to be available to download for performance-oriented architecture, so that we can re-purpose them for our environment.

For how long have I used the solution?

We have been working with this solution for the last five years.

What do I think about the stability of the solution?

We have found this to be a stable solution.

What do I think about the scalability of the solution?

This solution allows for easy auto-scaling.

How was the initial setup?

The initial setup for this solution is straightforward.

What other advice do I have?

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
reviewer1217268 - PeerSpot reviewer
CTO & Product at a financial services firm with 11-50 employees
Real User
Easy to use with a good performance and decent technical support
Pros and Cons
  • "Technical support has been great."
  • "I'd like the solution to be more plug-and-play."

What is our primary use case?

We use it as a services platform for our architecture; it's a cloud solution for basically anything you need.

What is most valuable?

I'm happy with the solution.

It's very easy to use. 

The stability and performance are great.

The scalability of the product is great.

Technical support has been great.

What needs improvement?

I'd like the solution to be more plug-and-play.

For how long have I used the solution?

I've been using the solution for about ten years at this point. It's been a while.  

What do I think about the stability of the solution?

The stability is very good. The performance is great and it's quite reliable. There are no bugs or glitches. It doesn't crash or freeze.

What do I think about the scalability of the solution?

AWS scales well. It's not a problem to expand it. 

We have 100 users using the solution at this time. They are end-users and clients. 

Our plan right now is to increase usage in the future.

How are customer service and support?

I've used technical support in the past. I don't have any complaints about their services. They are quite good overall.

Which solution did I use previously and why did I switch?

We did use a different solution, however, the company decided to move to AWS.

How was the initial setup?

There's no installation involved. It's a very straightforward product.

As there is no installation process, you don't need a technical team and you don't have to do any maintenance. 

What's my experience with pricing, setup cost, and licensing?

There is a license fee that you need to pay. There are flexible payment options. For example, you can pay monthly if you want to.

What other advice do I have?

I make recommendations for the development of cloud solutions.

I would rate the solution at an eight out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
reviewer930093 - PeerSpot reviewer
Director - Technology Operations at an educational organization with 10,001+ employees
Real User
Helpful service for a variety of applications
Pros and Cons
  • "Amazon AWS contains a lot of helpful services."
  • "Amazon AWS would be improved if it were more stable and if customer support's responses were faster."

What is our primary use case?

We use Amazon AWS for many applications as well as Amazon's native services. We have a mix of content-based workloads and traditional legacy-type applications.

What is most valuable?

Amazon AWS contains a lot of helpful services. 

What needs improvement?

Amazon AWS would be improved if it were more stable and if customer support's responses were faster. 

For how long have I used the solution?

I have been using this solution for many years, somewhere between seven and ten. 

What do I think about the stability of the solution?

This solution has been relatively stable. We had one issue sometime back, so the infrastructure could be more resilient. 

What do I think about the scalability of the solution?

This solution is scalable. 

How are customer service and support?

I have contacted customer support and their response time could be faster. 

Which solution did I use previously and why did I switch?

We migrated to Amazon AWS from our data centers.

How was the initial setup?

The installation was straightforward. The installation time varies depending on workloads. 

What about the implementation team?

I implemented through an in-house team. We have multiple teams for deployment and maintenance. 

What's my experience with pricing, setup cost, and licensing?

There is no licensing cost. 

What other advice do I have?

I would rate Amazon AWS an eight out of ten. I recommend this solution to anyone who wants to start using it. 

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
reviewer1584264 - PeerSpot reviewer
Sr. System Architect at a healthcare company with 10,001+ employees
Real User
Easy to install and has good storage, gateway, and documentation
Pros and Cons
  • "The storage is most valuable. The gateway and documentation are also quite good."
  • "Its price should be lower. The price for in-house usage should be different from production usage."

What is our primary use case?

We have just started to use this solution. We are using it for Amazon S3 buckets.

What is most valuable?

The storage is most valuable. The gateway and documentation are also quite good.

What needs improvement?

Its price should be lower. The price for in-house usage should be different from production usage.

For how long have I used the solution?

I have been using Amazon AWS for around a month.

Which solution did I use previously and why did I switch?

I didn't use any other solution previously.

How was the initial setup?

It was easy to install.

What's my experience with pricing, setup cost, and licensing?

Its price should be lower. Currently, the price is the same if you are working in-house or in production. If you have to do internal testing or you are checking if things are working in-house, you need to pay for that, and the price is the same. The price for in-house usage should be different from production usage.

What other advice do I have?

It is a good solution. I would rate Amazon AWS an eight out of ten.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user