We are using the solution for network virtualization.
High-quality features, flexible, and excellent virtualization capabilities
Pros and Cons
- "The most valuable features I have found are the Database Migration Service (DMS) for monitoring the host and routing, Route 53, and EC2 tools. The DMS is not available in any other solution that I am aware of. They have a very flexible and professional solution."
- "If you have not had previous training or studied guides it will be a little difficult to use the solution. However, the difficulty also depends on what you are using the solution for. They can improve by providing more documentation, such as tutorials and videos."
What is our primary use case?
What is most valuable?
The most valuable features I have found are the Database Migration Service (DMS), Route 53 for monitoring hosts and routing, and the EC2 tools. The DMS is not available in any other solution that I am aware of. They have a very flexible and professional solution.
What needs improvement?
If you have not had previous training or studied guides, the solution will be a little difficult to use. However, the difficulty also depends on what you are using the solution for. They could improve by providing more documentation, such as tutorials and videos.
For how long have I used the solution?
I have been using the solution for approximately five years.
What do I think about the stability of the solution?
I have found the stability very good.
What do I think about the scalability of the solution?
The scalability is good.
How are customer service and support?
The technical support has been fine in my experience.
What's my experience with pricing, setup cost, and licensing?
The price of the solution is reasonable.
What other advice do I have?
Amazon AWS is the most powerful tool and is at the top for cloud and for virtualization. It has many features and products. It is wonderful and I keep learning from them.
I would highly recommend this solution to others.
I rate Amazon AWS a ten out of ten.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Software Engineer at a tech services company with 1,001-5,000 employees
Detailed dashboard, easy to follow documentation, and reliable
Pros and Cons
- "This solution offers a very detailed dashboard that has some metrics, such as performance and budget."
- "In a future release, I would like to see more support for AI because it is the future."
What is our primary use case?
I am using the solution to create my own virtual servers in the cloud. We use one of those servers to deploy a NoSQL database on MongoDB. MongoDB supports all types of databases.
Here is a more detailed explanation. I needed to deploy a backup API that was not part of the project that needed it. The backup API essentially processes data from the MongoDB database. Initially, we implemented it locally and tested all the endpoints, and then we deployed it to the AWS services. We needed it to be online and to communicate with the front end, which is an Angular app that receives the data from another NoSQL database.
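To make that setup concrete, here is a minimal sketch of such a backup API, assuming Python with Flask and pymongo rather than whatever stack was actually used; the connection string, database name, and route are hypothetical placeholders.

```python
# Minimal sketch of a backup-style API over MongoDB (hypothetical names throughout).
from flask import Flask, jsonify
from pymongo import MongoClient

MONGO_URI = "mongodb://localhost:27017"   # placeholder; point at the EC2-hosted MongoDB
client = MongoClient(MONGO_URI)
db = client["app_db"]                     # hypothetical database name

app = Flask(__name__)

@app.route("/backup/<collection>")
def backup(collection):
    # Return every document in the requested collection, dropping Mongo's internal _id
    # field so the payload is plain JSON the Angular front end can consume.
    docs = list(db[collection].find({}, {"_id": 0}))
    return jsonify(docs)

if __name__ == "__main__":
    # Bind on all interfaces so the front end can reach the API once deployed on AWS.
    app.run(host="0.0.0.0", port=5000)
```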
What is most valuable?
This solution offers a very detailed dashboard with metrics such as performance and budget. You can find documentation on how to do almost anything. It supports multiple services where you can use Linux. Depending on your use case, you can also manage resource allocations, for example, hard drive space and memory.
Additionally, the solution is user-friendly, has intuitive dashboards, and plenty of graphs and charts available.
What needs improvement?
We had some problems with bandwidth because of high usage. There were so many queries going to the API, and since we were on a budget, we had not provisioned the resources our software needed. At those times the performance was really slow, and I needed to log in using a remote session to check whether there was a problem and, if there was, restart the server. If you have the required budget and you know how to customize it, I think it would work fine.
In a future release, I would like to see more support for AI because it is the future.
For how long have I used the solution?
I have been using the solution for the past year.
What do I think about the stability of the solution?
The solution was really reliable, even when we were using it as a demo to showcase to clients.
The cloud-based environment was secure for everything I used it for. It does not allow you to just log in for remote sessions; you need to configure which computers can log into your accounts, which supports proper user management.
What do I think about the scalability of the solution?
The solution is scalable. Whenever you want to scale up and improve it, they really offer you that opportunity, for example, by increasing the hardware and resources.
How are customer service and technical support?
The customer support is exceptional. I was having some problems with deploying the server and had to contact support. I began chatting with the chatbot, and when it did not help me I was transferred to one of their support attendants. Once the ticket is submitted, they send you another email three days later to check whether the problem was resolved. They are really helpful.
Additionally, at one of the AI summits, Amazon had a room filled with many technical professionals. They all had different technical backgrounds and were willing to give support to those who asked questions. It was really helpful.
How was the initial setup?
The installation for me was straightforward since I have some technical background. However, I did still need to read some documentation. The installation documentation is good and informative. Those who are new to the solution can search the internet for information to guide them and there are courses online to follow too.
What's my experience with pricing, setup cost, and licensing?
When I first started using the solution I used a free trial, and then we upgraded to a pay-as-you-go subscription. We have an allocated budget of $50. I am happy with the pricing because the free trial helped me progress on the project. In our country, there are limitations on what payment methods we can use: PayPal is not supported, and credit card transactions are delayed. Hopefully, this gets better in the future. However, in other countries, this is not a widespread problem.
What other advice do I have?
The type of deployment depends on the needs of the organization. There are some solutions that we deploy for clients that need a more stable environment; in those cases, we use the cloud. For testing purposes and internal projects, we use our own servers within the company. I think, based on the IP numbers, we can request a certain server to be created and they create it for us. Some clients require a cloud-based solution; we have this capability, but it is still in testing and it is not what we use our servers for.
I would recommend this solution for small companies just starting out. It would be really helpful for them.
I rate Amazon AWS a nine out of ten.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Director of Technology at an energy/utilities company with 51-200 employees
Stable, flexible, always up to date, and works well as long-term or short-term storage
Pros and Cons
- "It's a flexible solution."
- "The interface is relatively complex."
What is our primary use case?
We use this solution for everything. It's our infrastructure. You can have long-term or short-term storage. You can have elastic servers, analytical AI, machine learning services, and API services.
What is most valuable?
It's a flexible solution.
What needs improvement?
The interface is relatively complex. It's not complex when you compare it to Azure, but with some other competitors, it is a little complex.
The interface could be simplified. It's an area that needs improvement, as well as the price.
For how long have I used the solution?
I have been using this solution for five years.
We are using the latest version. It's always kept up-to-date by Amazon.
What do I think about the stability of the solution?
It's a stable solution.
What do I think about the scalability of the solution?
It's a scalable product. Everyone in our organization is using this solution. We have 100 users.
We are not sure if we are going to continue using this product. We may move to Azure or GCP. We haven't made that decision.
How are customer service and technical support?
You have support but not very much. It's all do-it-yourself and you figure it out for the most part.
You have outside consulting firms that provide the support.
Which solution did I use previously and why did I switch?
We use Azure, just for backups.
How was the initial setup?
There is nothing to install, it's cloud. It's easy.
What's my experience with pricing, setup cost, and licensing?
The prices are a bit high. But they are the first ones on the market to really do this and they have a monopoly on it.
Depending on what you get, you will have to pay for a license. For example, if you get SQL Server, which is a Microsoft product, you need to pay for a license. If you get other products, you may have to get a license. They will provide that or they will sell it to you.
In some instances, you may bring your own licenses.
Which other solutions did I evaluate?
Azure has better services for some aspects, and Google GCP has obviously got some competing products. I think each provider has its benefits, advantages, and disadvantages.
What other advice do I have?
I would rate Amazon AWS an eight out of ten.
Which deployment model are you using for this solution?
Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Amazon cloud storage options enhanced with Glacier
In case you missed it, Amazon Web Services (AWS) has enhanced their cloud services (Elastic Cloud Compute or EC2) along with storage offerings. These include Relational Database Service (RDS), DynamoDB, Elastic Block Store (EBS), and Simple Storage Service (S3). Enhancements include new functionality along with availability or reliability improvements in the wake of recent events (outages or service disruptions). Earlier this year AWS announced their Cloud Storage Gateway solution, an analysis of which you can read here. More recently AWS announced provisioned IOPS among other enhancements (see the AWS what's new page here).
Before announcing Glacier, options for Amazon storage services relied on general purpose S3 or EBS with other Amazon services. S3 has provided users the ability to select different availability zones (e.g. geographical regions where data is stored) along with level of reliability for different price points for their applications or services being offered.
Note that AWS S3 flexibility lends itself to individuals or organizations using it for various purposes. This ranges from storing backup or file-sharing data to being used as a target for other cloud services. S3 pricing options vary depending on which availability zones you select as well as whether you choose standard or reduced redundancy. As its name implies, reduced redundancy trades a lower availability recovery time objective (RTO) in exchange for a lower cost per given amount of space capacity.
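As a small illustration of choosing between those price points, here is a hedged sketch using Python and boto3 (my own example, not from the original post) that uploads one object with standard redundancy and one with reduced redundancy; the bucket and key names are placeholders.

```python
# Sketch: choosing an S3 storage class per object (hypothetical bucket/key names).
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # placeholder

# Standard redundancy (default) for data you cannot easily recreate.
s3.put_object(Bucket=BUCKET, Key="backups/db-dump.gz", Body=b"...")

# Reduced redundancy for easily reproducible data, trading resilience for a lower price.
s3.put_object(
    Bucket=BUCKET,
    Key="thumbnails/image-001.jpg",
    Body=b"...",
    StorageClass="REDUCED_REDUNDANCY",
)
```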
AWS has now announced a new class or tier of storage service called Glacier, which, as its name implies, moves very slowly and is capable of supporting large amounts of data. In other words, it targets inactive or seldom-accessed data where the emphasis is on ultra-low cost in exchange for a longer RTO. In exchange for an RTO that AWS states can be measured in hours, your monthly storage cost can be as low as 1 cent per GByte, or about 12 cents per GByte per year, plus any extra fees (see here).
Here is a note that I received from the Amazon Web Services (AWS) team:
----------------------
Dear Amazon Web Services Customer,
We are excited to announce the immediate availability of Amazon Glacier – a secure, reliable and extremely low cost storage service designed for data archiving and backup. Amazon Glacier is designed for data that is infrequently accessed, yet still important to keep for future reference. Examples include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance. With Amazon Glacier, customers can reliably and durably store large or small amounts of data for as little as $0.01/GB/month. As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.
Amazon Glacier is:
Low cost- Amazon Glacier is an extremely low-cost, pay-as-you-go storage service that can cost as little as $0.01 per gigabyte per month, irrespective of how much data you store.
Secure – Amazon Glacier supports secure transfer of your data over Secure Sockets Layer (SSL) and automatically stores data encrypted at rest using Advanced Encryption Standard (AES) 256, a secure symmetric-key encryption standard using 256-bit encryption keys.
Durable- Amazon Glacier is designed to give average annual durability of 99.999999999% for each item stored.
Flexible -Amazon Glacier scales to meet your growing and often unpredictable storage requirements. There is no limit to the amount of data you can store in the service.
Simple- Amazon Glacier allows you to offload the administrative burdens of operating and scaling archival storage to AWS, and makes long term data archiving especially simple. You no longer need to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.
Designed for use with other Amazon Web Services – You can use AWS Import/Export to accelerate moving large amounts of data into Amazon Glacier using portable storage devices for transport. In the coming months, Amazon Simple Storage Service (Amazon S3) plans to introduce an option that will allow you to seamlessly move data between Amazon S3 and Amazon Glacier using data lifecycle policies.
Amazon Glacier is currently available in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), and Asia Pacific (Japan) Regions.
A few clicks in the AWS Management Console are all it takes to set up Amazon Glacier. You can learn more by visiting the Amazon Glacier detail page, reading Jeff Barr's blog post, or joining our September 19th webinar.
Sincerely,
The Amazon Web Services Team
----------------------
What is AWS Glacier?
Glacier is low-cost, lower-performance (e.g. access time) storage suited to applications including archiving of inactive or idle data that you are not in a hurry to retrieve. Pay-as-you-go pricing can be as low as $0.01 USD per GByte per month (other optional fees may apply, see here) depending on availability zone. Availability zones or regions include the US West coast (Oregon or Northern California), US East coast (Northern Virginia), Europe (Ireland) and Asia (Tokyo).
Now, what is understood should not have to be discussed; however, just to be safe, pity the fool who complains about signing up for AWS Glacier due to its penny-per-GByte-per-month cost and then finds it too slow for their iTunes or videos, as you know it is going to happen. Likewise, you know that some creative vendor or their surrogate is going to try to show a mismatch of AWS Glacier vs. their faster service that caters to a different usage model; it is just a matter of time.
Let's be clear: Glacier is designed for low-cost, high-capacity, slow access of infrequently accessed data such as an archive or other items. This means that you will be more than disappointed if you try to stream a video, or access a document or photo, from Glacier as you would from S3 or EBS or any other cloud service. The reason is that Glacier is designed with the premise of low cost, high capacity and high availability at the cost of slow access time or performance. How slow? AWS states that you may have to wait several hours to reach your data when needed; that is the tradeoff. If you need faster access, pay more or find a different class and tier of storage service to meet that need, perhaps, for those with a real need for speed, AWS SSD capabilities ;).
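To make the hours-long RTO tangible, here is a hedged boto3 sketch (my own illustration, not from the original post) of the asynchronous retrieval flow: you request an archive-retrieval job, wait until Glacier says it is ready, and only then download the bytes; the vault name and archive ID are placeholders.

```python
# Sketch of Glacier's asynchronous retrieval model (hypothetical vault/archive IDs).
import time
import boto3

glacier = boto3.client("glacier")
VAULT = "example-vault"            # placeholder
ARCHIVE_ID = "EXAMPLE-ARCHIVE-ID"  # placeholder returned by a prior upload_archive call

# Step 1: ask Glacier to stage the archive; this typically completes in hours, not seconds.
job = glacier.initiate_job(
    vaultName=VAULT,
    jobParameters={"Type": "archive-retrieval", "ArchiveId": ARCHIVE_ID},
)

# Step 2: poll (or use SNS notifications) until the job is done.
while True:
    status = glacier.describe_job(vaultName=VAULT, jobId=job["jobId"])
    if status["Completed"]:
        break
    time.sleep(15 * 60)  # check every 15 minutes; the wait is the price of the low cost

# Step 3: download the staged data.
output = glacier.get_job_output(vaultName=VAULT, jobId=job["jobId"])
with open("restored-archive.bin", "wb") as f:
    f.write(output["body"].read())
```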
Here is a link to a good post over at Planforcloud.com comparing Glacier vs. S3, which is like comparing apples and oranges; however, it helps to put things into context.
In terms of functionality, Glacier security includes Secure Sockets Layer (SSL), Advanced Encryption Standard (AES) 256 (256-bit encryption keys) data-at-rest encryption along with AWS Identity and Access Management (IAM) policies.
Persistent storage designed for 99.999999999% durability with data automatically placed in different facilities on multiple devices for redundancy when data is ingested or uploaded. Self-healing is accomplished with automatic background data integrity checks and repair.
Scale and flexibility are bound by the size of your budget or credit card spending limit along with what availability zones and other options you choose. There is integration with other AWS services including Import/Export, where you can ship large amounts of data to Amazon using different media and mediums. Note that AWS has also made a statement of direction (SOD) that S3 will be enhanced to seamlessly move data in and out of Glacier using data policies.
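That statement of direction later materialized as S3 lifecycle transitions. As a hedged sketch of what such a policy looks like through boto3 today (this reflects the feature as eventually delivered, not anything in the 2012 announcement), with placeholder bucket and prefix names:

```python
# Sketch: lifecycle rule that moves objects under a prefix to Glacier after 30 days
# (hypothetical bucket and prefix; reflects the S3-to-Glacier policy feature as later delivered).
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",   # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```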
Part of stretching budgets for organizations of all size is to avoid treating all data and applications the same (key theme of data protection modernization). This means classifying and addressing how and where different applications and data are placed on various types of servers, storage along with revisiting modernizing data protection.
While the low-cost of Amazon Glacier is an attention getter, I am looking for more than just the lowest cost, which means I am also looking for reliability, security among other things to gain and keep confidence in my cloud storage services providers. As an example, a few years ago I switched from one cloud backup provider to another not based on cost, rather functionality and ability to leverage the service more extensively. In fact, I could switch back to the other provider and save money on the monthly bills; however I would end up paying more in lost time, productivity and other costs.
What do I see as the barrier to AWS Glacier adoption?
Simple: getting vendors and other service providers to enhance their products or services to leverage the new AWS Glacier storage category. This means backup/restore, BC and DR vendors ranging from Amazon itself (e.g. releasing S3-to-Glacier automated policy-based migration), Commvault, Dell (via their acquisitions of AppAssure and Quest), EMC (Avamar, Networker and other tools), HP, IBM/Tivoli, Jungledisk/Rackspace, NetApp, Symantec and others, not to mention cloud gateway providers, will need to add support for these new capabilities, along with those from other providers.
As an Amazon EC2 and S3 customer, it is great to see Amazon continue to expand their cloud compute, storage, networking and application service offerings. I look forward to actually trying out Amazon Glacier for storing encrypted archive or inactive data to complement what I am doing. Since I am not using the Amazon Cloud Storage Gateway, I am looking into how I can use Rackspace Jungledisk to manage an Amazon Glacier repository similar to how it manages my S3 stores.
Some more related reading:
Only you can prevent cloud data loss
Data protection modernization, more than swapping out media
Amazon Web Services (AWS) and the NetFlix Fix?
AWS (Amazon) storage gateway, first, second and third impressions
As of now, it looks like I will have to wait either for Jungledisk to add native support, as it has today for managing my S3 storage pool, or for the automated policy-based movement between S3 and Glacier to be transparently enabled.
[To view all of the links mentioned in this post, go to: http://storageioblog.com/amazon-cloud-storage-options-enhanced-with-glacier/ ]
Some updates:
http://storageioblog.com/november-2013-server-storageio-update-newsletter/
http://storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Lambda and other AWS enhancements
A few weeks ago I attended Amazon Web Services (AWS) re:Invent 2014 in Las Vegas for a few days. For those of you who have not yet attended this event, I recommend adding it to your agenda. If you have an interest in compute servers, networking, storage, development tools or management of cloud (public, private, hybrid), virtualization and related topic themes, you should check out AWS re:Invent.
AWS made several announcements at re:Invent, including many around development tools, compute and data storage services. One to keep an eye on is the cloud-based Aurora relational database service that complements existing RDS tools. Aurora is positioned as an alternative to traditional SQL-based transactional databases commonly found in enterprise environments (e.g. SQL Server among others).
Some recent AWS announcements prior to re:Invent include
- AWS Adds EU (Frankfurt) Region
- Amazon Linux AMI Updates
- AWS Systems Manager for Microsoft System Center Virtual Machine Manager
- T2, the New Low-Cost, General Purpose Instance Type for Amazon EC2
- Windows Server 2012 R2 AMI Updates
- Zocalo Enterprise File Sync & Share updates (read more about Zocalo here)
- AWS Management Portal for vCenter Setup Enhancements
AWS vCenter Portal
Using the AWS Management Portal for vCenter adds a plug-in within your VMware vCenter to manage your AWS infrastructure. The vCenter plug-in for AWS includes support for AWS EC2 and Virtual Machine (VM) import to migrate your VMware VMs to AWS EC2, and to create VPCs (Virtual Private Clouds) along with subnets. There is no cost for the plug-in; you simply pay for the underlying AWS resources consumed (e.g. EC2, EBS, S3). Learn more about AWS Management Portal for vCenter here, and download the OVA plug-in for vCenter here.
AWS re:invent content
November 12, 2014 (Day 1) Keynote (highlight video, full keynote). This is the session where AWS SVP Andy Jassy made several announcements, including the Aurora relational database that complements the existing Relational Database Service (RDS). In addition to Andy, the keynote sessions also included various special guests ranging from AWS customers and partners to internal people in support of the various initiatives and announcements.
November 13, 2014 (Day 2) Keynote (highlight video, full keynote). In this session, Amazon.com CTO Werner Vogels appeared, making announcements about the new Container and Lambda services.
AWS re:Invent announcements
Announcements and enhancements made by AWS during re:Invent include:
- Key Management Service (KMS)
- Amazon RDS for Aurora
- Amazon EC2 Container Service
- AWS Lambda
- Amazon EBS Enhancements
- Application development, deployment and life-cycle management tools
- AWS Service Catalog
- AWS CodeDeploy
- AWS CodeCommit
- AWS CodePipeline
Key Management Service (KMS)
A hardware security module (HSM) based key management service for creating and controlling the encryption keys used to protect the security of digital assets. It integrates with AWS EBS and other services, including S3 and Redshift, along with CloudTrail logs for regulatory, compliance and management purposes. Learn more about AWS KMS here
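As a hedged illustration of that key-management flow (my own boto3 sketch, not from the announcement), creating a KMS key and using it to encrypt and decrypt a small payload might look like this; the key description is a placeholder, and real workloads typically use envelope encryption via generate_data_key rather than encrypting bulk data directly.

```python
# Sketch: create a KMS-managed key and round-trip a small secret through it.
import boto3

kms = boto3.client("kms")

# Create a customer master key (placeholder description).
key = kms.create_key(Description="example key for demo purposes")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small payload; KMS is meant for keys/secrets, not bulk data.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"database-password")["CiphertextBlob"]

# Decrypt it again; the ciphertext blob records which key was used.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"database-password"
```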
AWS Database
For those who are not familiar, AWS has a suite of database-related services, both SQL and NoSQL based, ranging from simple to transactional to Petabyte (PB) scale data warehouses for big data and analytics. AWS offers the Relational Database Service (RDS), which is a suite of different database types, instances and services. RDS instances and types include SimpleDB, MySQL, PostgreSQL, Oracle, SQL Server and the new AWS Aurora offering (read more below). Other little-data database and big-data repository related offerings include DynamoDB (a NoSQL database), ElastiCache (an in-memory cache repository) and Redshift (a large-scale data warehouse and big data repository).
In addition to the database services offered by AWS, you can also combine various AWS resources including EC2 compute, EBS and other storage offerings to create your own solution. For example, there are various Amazon Machine Images (AMIs), or pre-built operating systems and database tools, available with EC2 as well as via the AWS Marketplace, such as MongoDB and Couchbase among others. For those not familiar with MongoDB, Couchbase, Cassandra, Riak and other NoSQL or alternative databases and key-value repositories, check out Seven Databases in Seven Weeks and my book review of it here.
Amazon RDS for Aurora
Aurora is a new relational database offering that is part of the AWS RDS suite of services. Positioned as an alternative to commercial high-end databases, Aurora is a cost-effective database engine compatible with MySQL. AWS is claiming 5x better performance than standard MySQL with Aurora while being resilient and durable. Learn more about Aurora, which will be available in early 2015, and its current preview here.
Amazon EC2 C4 instances
AWS will be adding a new C4 instance as the next generation of EC2 compute instance, based on Intel Xeon E5-2666 v3 (Haswell) processors. The Intel Xeon E5-2666 v3 processors run at a clock speed of 2.9 GHz, providing the highest level of EC2 performance. AWS is targeting traditional High Performance Computing (HPC) along with other compute-intensive workloads including analytics, gaming, and transcoding among others. Learn more about AWS EC2 instances here, and view this Server and StorageIO EC2, EBS and associated AWS primer here.
Amazon EC2 Container Service
Containers such as those via Docker have become popular for helping developers rapidly build and deploy scalable applications. AWS has added a new feature called EC2 Container Service that supports Docker using simple APIs. In addition to supporting Docker, EC2 Container Service is a high-performance, scalable container management service for distributed applications deployed on a cluster of EC2 instances. Similar to other EC2 services, EC2 Container Service leverages security groups, EBS volumes and Identity and Access Management (IAM) roles along with scheduling placement of containers to meet your needs. Note that AWS is not alone in adding container and Docker support, with Microsoft Azure also having recently made some announcements; learn more about Azure and Docker here. Learn more about EC2 Container Service here and more about Docker here.
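For a sense of what those simple APIs look like in practice, here is a hedged boto3 sketch (my own, not from the announcement) that registers a small Docker-based task definition and runs it on a cluster; the family, image, and cluster names are placeholders, and the cluster's EC2 instances are assumed to already exist.

```python
# Sketch: register and run a Docker task on EC2 Container Service (hypothetical names).
import boto3

ecs = boto3.client("ecs")

# Describe a single-container task: which image to run and how much CPU/memory it may use.
ecs.register_task_definition(
    family="hello-web",  # placeholder task family
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "cpu": 128,
            "memory": 128,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        }
    ],
)

# Ask the scheduler to place one copy of the task on the cluster's EC2 instances.
ecs.run_task(cluster="default", taskDefinition="hello-web", count=1)
```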
AWS Lambda
In addition to announcing new higher performance Elastic Cloud Compute (EC2) compute instances along with container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your applications code in response to events, activities, or other triggers. In addition to running your code, Lambda service is billed in 100 millisecond increments along with corresponding memory use vs. standard EC2 per hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose to use the Lambda service with more fine-grained consumption billing.
The Lambda service can be used to have your code functions staged and ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Some examples include responding to an event such as a website click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things IoT or Internet of Devices IoD), or a trigger from an in-app event, among others. The basic idea with Lambda is to be able to pay for only the amount of time needed to do a particular function without having to have an AWS EC2 instance dedicated to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
Various application code deployment models
The Lambda service is pay-for-what-you-consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory, and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. As an example, if your application runs 100,000 times for 1 second each while consuming 128MB of memory, that is 12,800,000 MB-seconds, or 12,500 GByte-seconds. View the various pricing models here on the AWS Lambda site, which show examples for different memory sizes, number of times a function runs and run times.
How much memory you select for your application code determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, starting when the code runs. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests, and beyond that, $0.20 per 1 million requests ($0.0000002 per request). Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms, and the Lambda price also depends on the amount of memory you allocate for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GByte-second used.
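Putting those published figures together, here is a small Python sketch (my own worked example using the prices and free tier quoted above) that estimates a monthly Lambda bill for a hypothetical function; the invocation count, duration, and memory size are made-up inputs.

```python
# Worked example of Lambda billing using the rates quoted in this post
# ($0.20 per 1M requests, $0.00001667 per GB-second, free tier of 1M requests
# and 400,000 GB-seconds per month). Inputs below are hypothetical.

REQUEST_PRICE = 0.20 / 1_000_000       # USD per request beyond the free tier
GB_SECOND_PRICE = 0.00001667           # USD per GB-second beyond the free tier
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def monthly_cost(invocations: int, avg_duration_ms: int, memory_mb: int) -> float:
    # Duration is billed in 100 ms increments, rounded up.
    billed_seconds = (-(-avg_duration_ms // 100) * 100) / 1000.0
    gb_seconds = invocations * billed_seconds * (memory_mb / 1024.0)

    request_cost = max(invocations - FREE_REQUESTS, 0) * REQUEST_PRICE
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE
    return request_cost + compute_cost

# Example: 3 million invocations a month, 250 ms each, 128 MB of memory.
print(f"Estimated monthly cost: ${monthly_cost(3_000_000, 250, 128):.2f}")
```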
Why use AWS Lambda vs. an EC2 instance
Why would you use AWS Lambda vs. provisioning a container, an EC2 instance, or running your application code function on a traditional or virtual machine?
If you need control and can leverage an entire physical server with its operating system (OS), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (OS, applications and tools) for your code on a shared virtual on-premise environment, then that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an OS along with your application, paying for those resources on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that is where Docker and containers come into play to offload some of the traditional application dependency overhead.
However, if all you want to do is add some code logic to support a processing activity, for example when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, OS and complete application, that is where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke it, and you have a solution.
View AWS Lambda pricing along with free tier information here.
Amazon EBS Enhancements
AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes up to 16TB with 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create volumes up to 16TB with 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your IO size against AWS sizing information to avoid surprises, as all IO sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
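As a hedged sketch of provisioning at those new limits (my own boto3 example, not from the announcement), creating and attaching a 16 TB Provisioned IOPS volume might look like the following; the availability zone, instance ID, and device name are placeholders, and "io1" is assumed to be the API name for the Provisioned IOPS SSD type.

```python
# Sketch: create a 16 TB Provisioned IOPS SSD volume at 20,000 IOPS and attach it
# (hypothetical AZ, instance ID, and device name).
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder; must match the target instance's AZ
    Size=16 * 1024,                  # 16 TB expressed in GiB
    VolumeType="io1",                # Provisioned IOPS SSD (assumed API name)
    Iops=20000,
)

# Wait for the volume to become available, then attach it to an EBS-optimized instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```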
Application development, deployment and life-cycle management tools
In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development, configuration along with deployment (life-cycle management). These include tools that AWS uses themselves as part of building and maintaining the AWS platform services.
AWS Config (Preview e.g. early access prior to full release)
Management, reporting and monitoring capabilities, including data center infrastructure management (DCIM), for monitoring your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables capabilities to support DCIM, a Change Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis, among other activities. Learn more about AWS Config here.
AWS Service Catalog
AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources for users to use via their personalized portal. Learn more about AWS service catalog here.
AWS CodeDeploy
To support code rapid deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks complexity associated with deployment when adding new features to your applications while reducing human error-prone operations. As part of the announcement, AWS mentioned that they are using CodeDeploy as part of their own applications development, maintenance, and change-management and deployment operations. While suited for at scale deployments across many instances, CodeDeploy works with as small as a single EC2 instance. Learn more about AWS CodeDeploy here.
AWS CodeCommit
For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting the standard functionality of Git, including collaboration, you can store anything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.
AWS CodePipeline
To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, workflow checks, code staging, testing and release to production, including support for third-party tool integration. CodePipeline will be available in early 2015; learn more here.
What this all means
AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, unlike some, they also increase the depth and extensibility of those capabilities.
Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking along with associated management tools, while also adding extra developer tools. The developer tools include life-cycle management supporting code creation, testing, tracking and change management among other management activities.
Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted, you have to listen carefully, as you may not simply hear hybrid cloud used the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS Marketplace.
AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes they need to help existing environments evolve and make the transition to the cloud, which means speaking their language rather than forcing cloud conversations on them before they can migrate. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and related themes in future posts.
The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people; however, it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. A simple validation is that the keynotes were in the larger rooms used by events such as EMCworld and VMworld when they were hosted in Las Vegas, as was the expo space, compared with what I saw last year at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, and you can spend some time using the HOL, which is of course powered by AWS, meaning you can resume later what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
I like the ability for moving S3 objects within AWS, however I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS.
Cloud Conversations: AWS S3 Cross Region Replication storage enhancements
Amazon Web Services (AWS) recently among other enhancements announced new Simple Storage Service (S3) cross-region replication of objects from a bucket (e.g. container) in one region to a bucket in another region. AWS also recently enhanced Elastic Block Storage (EBS) increasing maximum performance and size of Provisioned IOPS (SSD) and General Purpose (SSD) volumes. EBS enhancements included ability to store up to 16 TBytes of data in a single volume and do 20,000 input/output operations per second (IOPS). Read more about EBS and other recent AWS server, storage I/O and application enhancements here.
The Problem, Issue, Challenge, Opportunity and Need
The challenge is being able to move data (e.g. objects) stored in AWS buckets in one region to another in a safe, secure, timely, automated, cost-effective way.
Even though AWS has a global name-space, buckets and their objects (e.g. files, data, videos, images, bit and byte streams) are stored in a specific region designated by the customer or user (AWS S3, EBS, EC2, Glacier, Regions and Availability Zone primer can be found here).
Understanding the challenge and designing a strategy
The following diagram shows the challenge and how to copy or replicate objects in an S3 bucket in one region to a destination bucket in a different region. While objects can be copied or replicated without S3 cross-region replication, that involves essentially reading your objects, pulling that data out via the internet and then writing it to another place. The catch is that this can add extra costs, take time, consume network bandwidth and require extra tools (Cloudberry, Cyberduck, S3fuse, S3motion, S3browser, S3 tools (not AWS) and a long list of others).
What is AWS S3 Cross-region replication
Highlights of AWS S3 Cross-region replication include:
- AWS S3 cross-region replication is, as its name implies, replication of S3 objects from a bucket in one region to a destination bucket in another region.
- S3 replication of new objects added to an existing or new bucket (note new objects get replicated)
- Policy based replication tied into S3 versioning and life-cycle rules
- Quick and easy to set up for use in a matter of minutes via S3 dashboard or other interfaces
- Keeps region to region data replication and movement within AWS networks (potential cost advantage)
To activate it, you simply enable versioning on a bucket, enable cross-region replication, indicate the source bucket (or a prefix of objects in the bucket), specify the destination region and target bucket name (or create one), then create or select an IAM (Identity and Access Management) role, and objects should be replicated.
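Here is a hedged boto3 sketch of those same steps (my own illustration of the setup the dashboard walks you through); the bucket names, region, and IAM role ARN are placeholders, and the role is assumed to already grant S3 the replication permissions.

```python
# Sketch: enable versioning and cross-region replication from a source bucket to a
# destination bucket in another region (hypothetical names and role ARN).
import boto3

s3 = boto3.client("s3")

SOURCE = "example-source-bucket"  # placeholder, e.g. in US Standard
DEST = "example-dest-bucket"      # placeholder, created in another region (e.g. eu-west-1)
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"  # placeholder IAM role

# Versioning must be enabled on both buckets before replication can be configured.
for bucket in (SOURCE, DEST):
    s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

# Replicate all new objects (empty prefix) from the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket=SOURCE,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-new-objects",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": f"arn:aws:s3:::{DEST}"},
            }
        ],
    },
)
```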
Some AWS S3 cross-region replication things to keep in mind (e.g. considerations):
- As with other forms of mirroring and replication, if you add something on one side it gets replicated to the other side
- As with other forms of mirroring and replication, if you delete something on one side it can be deleted on both (be careful and do some testing)
- Keep costs in perspective as you still need to pay for your S3 storage at both locations as well as applicable internal data transfer and GET fees
- Click here to see current AWS S3 fees for various regions
S3 Cross-region replication and alternative approaches
There are several regions around the world, and up until today AWS customers could copy, sync or replicate S3 bucket contents between AWS regions manually (or via automation) using various tools such as Cloudberry, Cyberduck, S3browser and S3motion, to name just a few, as well as via various gateways and other technologies. Some of those tools and technologies are open-source or free, some are freemium and some are premium; they also vary by interface (some with GUIs, others with CLIs or APIs), including the ability to mount an S3 bucket as a local network drive and use tools to sync or copy.
However, a catch with the above-mentioned tools (among others) and approaches is that replicating your data (e.g. objects in a bucket) can involve other AWS S3 fees. For example, reading data (e.g. a GET, which has a fee) from one AWS region and then copying it out to the internet has fees. Likewise, when copying data into another AWS S3 region (e.g. a PUT, which is free) there is also the cost of storage at the destination.
AWS S3 cross-region hands on experience (first look)
For my first hands-on (first look) experience with AWS cross-region replication today, I enabled a bucket in the US Standard region (e.g. Northern Virginia) and created a new target destination bucket in EU Ireland. Setup and configuration was very quick, literally just a few minutes, with most of the time spent reading the text on the new AWS S3 dashboard properties configuration displays.
I selected an existing test bucket to replicate and noticed that nothing had replicated over to the other bucket until I realized that only new objects would be replicated. Once some new objects were added to the source bucket, within a matter of moments (e.g. a few minutes) they appeared across the pond in my EU Ireland bucket. When I deleted those replicated objects from my EU Ireland bucket and switched back to my view of the source bucket in the US, those objects were already deleted from the source as well. Yes, just like regular mirroring or replication, pay attention to how you have things configured (e.g. synchronized vs. contribute vs. echo of changes etc.).
While I was not able to do a solid, quantifiable performance test, based simply on some quick copies and my network speed, moving via S3 cross-region replication was faster than using something like S3motion with my server in the middle.
It also appears from some initial testing today that a benefit of AWS S3 cross-region replication (besides being bundled and part of AWS) is that some fees to pull data out of AWS and transfer out via the internet can be avoided.
Where to learn more
Here are some links to learn more about AWS S3 and related topics
- Cross-Region Replication for Amazon S3
- Cloud conversations: If focused on cost you might miss other cloud storage benefits
- Data Protection Diaries
- Cloud Conversations: AWS overview and primer
- Eight Ways to Avoid Cloud Storage Pricing Surprises
- Cloud and Object Storage Center
- Are more than five nines of availability really possible?
- How do primary storage clouds and cloud for backup differ?
- What’s most important to know about my cloud privacy policy?
What this all means and wrap-up
For those who are looking for a way to streamline replicating data (e.g. objects) from an AWS bucket in one region with a bucket in a different region you now have a new option. There are potential cost savings if that is your goal along with performance benefits in addition to using what ever might be working in your environment. Replicating objects provides a way of expanding your business continuance (BC), business resiliency (BR) and disaster recovery (DR) involving S3 across regions as well as a means for content cache or distribution among other possible uses.
Overall, I like this ability for moving S3 objects within AWS; however, I will continue to use other tools such as S3motion and s3sfs for moving data in and out of AWS as well as among other public cloud services and local resources.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Independent Analyst and Advisory Consultant at Server StorageIO - www.storageio.com
Amazon Web Services (AWS) and the NetFlix Fix?
I received the following note from Amazon Web Services (AWS) about an enhancement to their Elastic Compute Cloud (EC2) service that can be seen by some as an enhancement to the service, or perhaps by others, after last week's outages, as a fix addressing a gap in their services. Note for those not aware: you can view the current AWS service status portal here.
The following is the note I received from AWS.
Announcing Multiple IP Addresses for Amazon EC2 Instances in Amazon VPC
Dear Amazon EC2 Customer,
We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.
The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.
You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged $0.005/hour for each additional EIP associated with that instance per hour on a pro rata basis.
With this release we are also lowering the charge for EIP addresses not associated with running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis. This price reduction is applicable to EIP addresses in both Amazon EC2 and Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.
To learn more about multiple IP addresses, visit the Amazon VPC User Guide. For more information about pricing for additional Elastic IP addresses on an instance, please see Amazon EC2 Pricing.
Sincerely,
The Amazon EC2 Team
End of AWS message
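For readers who want to see what the feature described in the note above looks like programmatically, here is a hedged boto3 sketch (my own, not part of the AWS announcement) that adds secondary private IPs to a VPC network interface and associates an Elastic IP with one of them; the interface ID and private IP are placeholders.

```python
# Sketch: multiple IP addresses on an EC2 instance's VPC network interface
# (hypothetical ENI ID and private IP).
import boto3

ec2 = boto3.client("ec2")
ENI_ID = "eni-0123456789abcdef0"  # placeholder network interface attached to the instance

# Ask AWS to assign two additional private IPs from the subnet's range.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=ENI_ID,
    SecondaryPrivateIpAddressCount=2,
)

# Allocate a VPC Elastic IP and bind it to one of the new private IPs,
# e.g. to host a second SSL website on the same instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=ENI_ID,
    PrivateIpAddress="10.0.0.10",  # placeholder; one of the assigned private IPs
)
```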
Either way you look at it, AWS (disclosure: I'm a paying EC2 and S3 customer) is taking responsibility on their part to do what is needed to enable a resilient, flexible, scalable data infrastructure. What I mean by that is that protecting data and access to it in cloud environments is a shared responsibility, including discussing what went wrong, how to fix and prevent it, as well as communicating best practices. That is, both the provider of the service and those who are using its capabilities have to take some ownership of and responsibility for how they get used.
For example, last week major thunderstorms rolled across the U.S., causing large-scale power outages along the eastern seaboard and in particular in the Virginia area, where one of Amazon's availability zones (US East-1) has data centers located. Keep in mind that Amazon availability zones are made up of a collection of different physical data centers to cut or decrease the chance of a single point of failure. However, on June 30, 2012, during the major storms on the East coast of the U.S., something did go wrong, and as is usually the case, a chain of events resulted in or near a disaster (you can read the AWS post-mortem here).
The result is that AWS services based out of the Virginia availability zone were knocked offline for a period, which impacted EC2, Elastic Block Storage (EBS), Relational Database Service (RDS) and Elastic Load Balancer (ELB) capabilities for that zone. This is not the first time that the Virginia availability zone has been affected, having experienced a disruption about a year ago. What was different about this most recent outage is that a year ago one of the marquee AWS customers, NetFlix, was not affected due to how they use multiple availability zones for HA. In last week's AWS outage, NetFlix customers or services were affected, however, not due to loss of data or systems but rather loss of access (which to a user or consumer is the same thing). The loss of access was due to the failure of elastic load balancing to allow users access to other availability zones.
Consequently, if you choose to read between the lines of the above email note I received from AWS, you can either look at the new service capabilities as an enhancement, or as AWS learning and improving their capabilities. Also reading between the lines, you can see how some environments, such as NetFlix, take responsibility for how they use cloud services, designing for availability, resiliency and scale with stability as opposed to simply using them as a cost-cutting tool.
Thus, when both the provider and the consumer take some responsibility for ensuring data protection and accessibility to services, there is less of a chance of service disruptions. Likewise, when both parties learn from incidents or mistakes or leverage experiences, it makes for a more robust solution on a go-forward basis. For those who have been around the block (or file) a few times, thinking that clouds are not reliable or still immature, you may have a point; however, think back to when your favorite or preferred platform (e.g. Mainframe, Mini, PC, client-server, iProduct, Web or other) initially appeared, along with its teething problems or associated headaches.
IMHO, AWS, along with other vendors or service providers who take responsibility to publish post-mortems of incidents, find and fix issues, and address and enhance capabilities, is part of the solution for laying the groundwork for the future vs. simply playing to a near-term trend theme. Likewise, vendors and service providers who are reaching out and helping to educate their customers to take some responsibility for how they can use services to remove complexity (and cost) and enhance services, as opposed to simply cutting cost and introducing risk, will do better over the long run.
As I discuss in my book Cloud and Virtual Data Storage Networking (CRC Press), do not be scared of clouds, however be ready, do your homework, learn and understand what needs to be done or done differently. This means taking a shared responsibility one that the service provider should also be taking with you not to mention identifying new best practices, tools to be used along with conducting proof of concepts (POCs) to learn what to do and what not to do.
[To view all of the links mentioned in this post, go to: http://storageioblog.com/amazon-web-services-aws-and-the-netflix-fix/ ]
Some updates:
http://storageioblog.com/november-2013-server-storageio-update-newsletter/
http://storageioblog.com/fall-2013-aws-cloud-storage-compute-enhancements/
Disclosure: I am a real user, and this review is based on my own experience and opinions.
I can help you with anything you need to ask about AWS consulting. It is a great platform, and the people there are always helpful.
Website: www.clickittech.com