It's similar to AWS.
It's an infrastructure as a service.
It's a cloud service, so it's always up to date.
If you compare it with AWS, it is not very friendly to use. I find the UI better to work with on AWS.
They provide service easily for Windows, but not for Linux.
I am a beginner and have only been using Microsoft Azure for a couple of months.
Microsoft Azure is pretty stable.
It's a scalable product. Because it's a cloud service, there is an infinite opportunity for scaling.
We only have one or two people who are using this solution in our organization.
I reached out to support once, but they were not that quick to respond.
Technical support could be faster.
I have experience with AWS, and Microsoft Azure is not as friendly. It is a bit complicated compared to AWS.
It's a cloud service. There is no installation.
It is expensive, though less expensive than AWS. Even so, the price could be lower.
It is similar to AWS, where it is on-demand and is billed monthly.
We are not currently using this solution and we are not sure if we will be using it in the future.
For those who are connected to the Windows Operating System, I would recommend this product. However, I would not recommend it for a Linux environment.
I would rate this solution a seven out of ten.
We use this solution to host applications for our customers. We're implementers, we support our clients.
We have 32 customers using this solution. We have a technical team that handles all maintenance-related issues, and an integrated team in place for each platform.
Microsoft Azure is flexible and the performance is great.
From a security perspective, it could be improved. A little bit of automation would also be nice.
I have been using this solution for four years.
Microsoft Azure is both stable and scalable.
The technical support could be faster.
Overall, the initial setup is quite straightforward. Installation takes roughly four to five days.
Overall, on a scale from one to ten, I would give this solution a rating of eight.
We sell Microsoft products, and the licensing involved is an Open License.
We sell a variety of solutions to our customers.
Microsoft Azure is a very good solution for many customers. We prefer to offer hybrid solutions to our smaller-sized customers.
The online components run through Azure or Microsoft 365, while we prefer to keep the databases local.
It is used with Exchange for all of the mail, with Teams, and with sharing products such as SharePoint.
It is also used for security with AI, which we are still testing.
Microsoft Azure has a large scope, it can do many things.
Everything is very easy to work with.
I believe that some of the services need to be available on the on-premises version and not only based on the cloud.
There are some security issues in the cloud that cannot be solved. Some countries will not allow you to store certain types of data on the cloud.
We started using Microsoft 365 in 2009.
The stability needs improvement. At times when there is a problem in one area, everything gets stopped, which is not good.
If the problem is bandwidth, then even with a number of redundant lines linking to Azure, you have few options within Azure itself.
It is a scalable solution.
We haven't had a lot of issues. In our company, we prefer to learn about each product, and by doing this, we have fewer issues.
The installation is very easy.
Most areas of the application can be installed in a few minutes, but the configuration is a different story.
The configuration depends on the demands of the environment; it can take hours or it can take days.
Most of the pricing from Microsoft is reasonable.
The prices are very good for the services.
If Microsoft Azure meets the demands of the company, we would recommend it.
Other companies may require products like Amazon, or they use their own data center. It depends on the company.
I would rate most of the areas in Microsoft Azure an eight out of ten.
Within our organization, there are roughly 15 people using this solution.
It's a great solution. It's so customizable. Every user can create dashboards to suit their needs. We can create and share them with our teammates easily, too.
Microsoft Azure is also very user-friendly.
I honestly can't think of anything that needs to be improved.
I have been using this solution for five years.
Microsoft Azure is both scalable and stable.
The technical support is very responsive.
I evaluated Amazon and Google Cloud.
The initial setup was quite straightforward. Deployment depends on your requirements. Typically, I can deploy it within a week's time.
There are monthly and yearly payment plans. We save more in the long run with the yearly option.
I would absolutely recommend this solution to other users.
Microsoft Azure has all of the latest cloud services with all high availability and scalability. It's also very secure. The cost is more or less the same across all cloud service providers, too.
Overall, on a scale from one to ten, I would give this solution a rating of nine.
Due to the pandemic, I haven't been able to utilize their full resources. This has made it complicated to scale up. I hope this will be resolved after the pandemic.
The primary use case of this solution is generally infrastructure services, and also VC migrations and district modernization using the platform services. We're a consulting firm, so we use this solution and also deploy it for our customers. I'm an architect/project manager, and we are partners of Microsoft.
The main beneficial features the product provides are security, scalability, and elasticity.
The technical support could be improved. When we leave tickets, it can take some days before the issue is dealt with.
I've been using this solution for four years.
We haven't had any issues with stability in the last 12 months; prior to that, there were some small problems.
The scalability is good, most of our clients are enterprise size organizations.
The initial setup is straightforward, the environment is mature with a lot of documentation.
The price of the solution is okay although it depends on the region of the deployment.
A company should look at the suitability for their use case before choosing a solution.
I rate this solution an eight out of 10.
It is so huge and so powerful. The best thing is the possibilities of things that you can actually do with it. If you do it right, you can work or host your stuff a lot cheaper than traditionally.
Its security is good, and it also reduces the strain on internal IT.
I would like to see improved migration tools. It is improving week by week. They just need to make sure that they keep up with the new functionality provided in other clouds.
I have been using this solution for nine years.
I have not been overly impressed with stability in recent years.
By definition, it is scalable. Indirectly, one of our customers probably has more than 100,000 users.
If you take the normal technical support, then it is okay. If you pay for premium technical support, then it is really good.
Its initial setup is straightforward. You don't have to do anything. It is all done for you.
I am a Microsoft partner, so I would, of course, recommend it. I would recommend Azure, then AWS, and then Google.
I would rate Microsoft Azure a nine out of ten.
All development and pre-production scenarios are now under pay-as-you-go and are not running all the time. That way we save a lot of resources and money, and we gain the flexibility to quickly grant new capabilities, computing power, and PoC scenarios.
All of this without compromising production workloads or overall computing power, and without any additional investment.
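To give a concrete flavour of how that kind of saving can be automated (the reviewer doesn't describe their tooling, so this is only an illustrative sketch using the azure-identity and azure-mgmt-compute Python SDKs, with a hypothetical resource group and tag):

```python
# Illustrative sketch only: deallocate tagged dev/pre-production VMs outside
# working hours so they stop accruing pay-as-you-go compute charges.
# The resource group name and "environment=dev" tag are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "rg-dev-workloads"        # hypothetical resource group

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in client.virtual_machines.list(RESOURCE_GROUP):
    if (vm.tags or {}).get("environment") == "dev":
        # Deallocate (rather than just power off) so the cores are released
        # and compute billing stops; disks and IPs remain and still cost money.
        poller = client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name)
        poller.result()
        print(f"Deallocated {vm.name}")
```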
Software-defined networking (SDN) capabilities, flexible computing, and managed storage. Together they make a hybrid datacenter design more flexible for users and IT pros.
SDN capabilities let anyone manage and organize a virtualized network in as many levels (VNets and subnets) as you want, securing access as well. You can organize many VNets and easily interconnect them. You have several easy ways to connect your virtual DC in Azure with your on-premises DC, making it easy to have hybrid environments.
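As an illustration of that layering of VNets and subnets (the reviewer doesn't show their setup, so this is only a hedged sketch with the azure-mgmt-network Python SDK, using made-up names and address ranges):

```python
# Illustrative sketch: a VNet carved into an application subnet and a
# GatewaySubnet (the subnet a VPN/ExpressRoute gateway to the on-premises
# DC would use). Names, region and address ranges are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-hybrid-dc"         # hypothetical

network = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = network.virtual_networks.begin_create_or_update(
    RESOURCE_GROUP,
    "vnet-hybrid",
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.10.0.0/16"]},
        "subnets": [
            {"name": "snet-app", "address_prefix": "10.10.1.0/24"},
            {"name": "GatewaySubnet", "address_prefix": "10.10.255.0/27"},
        ],
    },
)
vnet = poller.result()
print(f"Created {vnet.name} with {len(vnet.subnets)} subnets")
```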
Managed storage capabilities provide a very simple way to create, copy, and replicate storage locally or geo-replicate it, and it's very simple to assign workloads.
All classic storage configuration settings are now managed by the platform. The huge granularity of available compute sizes makes it really easy to get appropriate sizing, or to change the sizing of current or future workloads.
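For example, geo-replication can be requested simply by choosing the SKU when the account is created; a minimal sketch with the azure-mgmt-storage Python SDK, with hypothetical names:

```python
# Minimal sketch: create a geo-replicated (Standard_GRS) StorageV2 account so
# the platform handles replication. Account and resource group names are
# hypothetical; the account name must be globally unique.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-hybrid-dc"         # hypothetical

storage = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = storage.storage_accounts.begin_create(
    RESOURCE_GROUP,
    "sthybriddcdata01",
    {
        "location": "westeurope",
        "kind": "StorageV2",
        # Standard_GRS asks the platform to geo-replicate to the paired region.
        "sku": {"name": "Standard_GRS"},
    },
)
account = poller.result()
print(account.name, account.sku.name)
```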
One of the most important areas for improvement is the administrative part of management. It's difficult to manage all aspects of Azure invoicing, to compare pricing versus usage, and to move under-used resources to a better-suited option.
For this, it's necessary to use third-party products or good self-managed cost tools.
Sometimes in the management portal, not in the workloads.
Not in the current situation. Not all workloads are running in Azure.
In our case, tech support was requested only two times, regarding Azure AD integration issues and special domain resolution issues. It was solved in a good way.
I want to mention a special domain resolution case. It was not easy to solve, and it was difficult to find an escalation engineer who understood the “problem” in order to fix it.
Because of better integration with AD and Azure AD, Office 365, and IaaS and PaaS Services.
In general, it was straightforward, but was really well analyzed and planned in order to minimize possible problems and complexity.
It’s a good idea to use BYOL if you have an EA. It’s a really noticeable cost reduction.
It’s also interesting to analyze carefully all invoicing costs and workload usages -- to better fit costing scenarios.
Yes, we analyzed Amazon AWS and Oracle Cloud.
Be careful: not all workloads are interesting or cost-viable to move to Azure as-is. In most cases, a significant transformation will be necessary to better fit the Azure ecosystem.
Focus on a first project in order to test all aspects related to the platform, providers, your own technical capabilities, and costs -- that will give you the tools to decide on future plans.
When I first had the idea to build https://report-uri.io, the biggest thing that jumped out at me was that there could be potentially huge amounts of inbound data that would need to be logged, stored and queried in an efficient manner. Doing some quick research, it's obvious that most of the time sites shouldn't really be generating CSP or HPKP violation reports, or so I thought. Once you have set up and refined your policy, you'd expect not to be getting any reports at all unless there was a problem, but this turned out not to be the case. Even excluding things like malvertising, ad-injectors and advertisers serving up http adverts on https pages, which I see a steady stream of constantly, there were things like policy misconfiguration and genuine XSS attacks that could also cause reports to be generated and sent, potentially in huge numbers. Every browser that visits a page with a violation will send a report, and there can be, and regularly are, multiple violations on a single page. Multiply that by a few heavily trafficked sites and you could very quickly have hundreds if not thousands of reports flooding in every single minute.
My first thought, as is fairly typical when one thinks 'I need a database', was towards the time tested SQL Server (or MySQL depending on your preference). Having had plenty of interactions with SQL Server in the past, I knew that it was more than capable of handling the simple requirements of a site like this. That said, I was also aware that the requirements of running a high performance and highly available database can be quite demanding. I knew I was going to want someone else to take care of this for me so I started looking around at different cloud providers. It became apparent pretty quickly that SQL Server in the cloud was fairly pricey for the budget I had in mind for the site!
SQL Azure was coming in at between £46 and £92 a month for a database capable of handling just a few thousand transactions a minute. Relatively cheap to some I have no doubt, but considering that all I'd looked at so far was the cost of the database, it wasn't a great start. Amazon also have their own offering of various flavours of RDBMS hosting but again, for a reasonable level of throughput and performance, I was looking at starting prices in the £40 - £50 a month region just to meet some basic needs.
My largest concern with having a fixed throughput was how easily an attacker could saturate it, given the nature of the site. If the database is only provisioned for 5,000 transactions per minute, the combination of inbound reports, queries against the data and my session store (more on that in another blog) could be quite demanding, and if the database becomes unavailable, the whole site stops working. I needed something without the throughput restrictions and a lot cheaper.
Having used MongoDB for one of my previous projects, the next logical step was to look and see what was available in terms of NoSQL databases. Again, the hosted solutions seemed to be fairly pricey and were constrained by the typical CPU/RAM tiers or just a given performance metric. With great database-as-a-service offerings from both Amazon and Microsoft in the form of DynamoDB and Table Storage respectively, I fired up a small test on both to try them out. One of the first things that cropped up with DynamoDB was the provisioned throughput again. You aren't actually billed for the transactions you make, you're billed for a maximum available throughput after which transactions will start to fail. If you don't use it, you're still paying for it, but as soon as you go over the limit, you're in trouble. This means that you'd need to provision a good portion above your average requirements to be able to handle bursts in traffic.
Still, it's a little cheaper at ~£30 a month for the equivalent level of throughput as the SQL Server database mentioned above, but we still have that maximum throughput limit. Microsoft do things a little differently with Table Storage in Azure and you're only billed for the transactions you actually use; there is no concept of provisioning for throughput. Each storage account can use as much or as little of the scalability limits as is required, and you never pay any more or less, just the per-transaction cost.
Having been fairly impressed with my initial testing of Table Storage, I decided to throw some numbers on a piece of paper and see what the costs were going to come out at. Each storage account has a performance target of 20,000 transactions per second. Yes, 20,000 per second! That means that my application can perform up to this limit with one restriction. There is a 2,000 transaction per second target on a Partition, which is similar to the concept of a table in a traditional relational database. This shouldn't be a problem as long as the data is partitioned properly, a note for later on. Beyond this though, there aren't any other limitations. If you make 1 transaction in a second you pay the cost of 1 transaction, if you make 1,000 transactions in a second you pay the cost for 1,000 transactions. There are no penalties or additional costs as your throughput increases. The really staggering part is that the cost of a single transaction is £0.000000022, or, to make that a bit easier to get your head around, £0.022 per 1,000,000 transactions. Not only is the incredibly low cost really attractive here, the requirements of my application don't really fit very well with being fixed into a set throughput limit, and Table Storage does away with that.
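The partitioning scheme itself is one for a later post, but as a purely hypothetical sketch of what "partitioned properly" might look like for inbound reports, spreading writes by reporting hostname plus an hour bucket keeps any single partition well under the 2,000 transactions per second target:

```python
# Hypothetical partition scheme (not the site's actual one): spread report
# writes across partitions by reporting hostname plus an hour bucket, so a
# burst against one busy site doesn't pin a single partition against the
# 2,000 transactions-per-second target.
from datetime import datetime, timezone
import uuid

def partition_key(hostname, when=None):
    """Build a PartitionKey such as 'example.com_2015061814'."""
    when = when or datetime.now(timezone.utc)
    return "{}_{:%Y%m%d%H}".format(hostname, when)

def row_key(when=None):
    """Time-ordered RowKey within the partition, made unique per report."""
    when = when or datetime.now(timezone.utc)
    return "{:%Y%m%d%H%M%S%f}_{}".format(when, uuid.uuid4().hex)

print(partition_key("example.com"), row_key())
```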
Beyond this, the only additional cost, like all other providers, is storage space for the database and outbound bandwidth, both of which are again billed based on exactly what you use without any limits or requirements to provision allowances. Data storage is billed at £0.0581/GB/month and the first 5GB of outbound bandwidth is free with a cost of £0.0532/GB after that.
To sum all of this up with a really simple example, I drew up the following.
To store 5GB of data, with 5GB of egress, and to issue 10 million transactions against that data would cost £0.5105 per month. That's less money than I lose down the side of the couch each month!
Even if we get really silly with these numbers and put 100GB in the database with 100GB of egress and issue 200 million transactions against the data, we're still only talking £15.264 per month! That equates to an average of about 4,629 transactions per minute, a fraction of any of the other quotes from other providers, and it proved attractive enough to tip the balance in favour of Azure Table Storage.
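As a quick sanity check on those numbers, here's the same arithmetic in a few lines of Python using the prices quoted above (a back-of-the-envelope sketch, not a billing tool):

```python
# Reproduce the cost examples above: £0.022 per million transactions,
# £0.0581/GB/month storage, first 5GB of egress free then £0.0532/GB.
def monthly_cost_gbp(storage_gb, egress_gb, transactions):
    tx_cost = transactions / 1_000_000 * 0.022
    storage_cost = storage_gb * 0.0581
    egress_cost = max(egress_gb - 5, 0) * 0.0532
    return tx_cost + storage_cost + egress_cost

print(monthly_cost_gbp(5, 5, 10_000_000))        # ~0.5105
print(monthly_cost_gbp(100, 100, 200_000_000))   # ~15.264
```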
Well, there isn't really a catch, as such, but Table Storage does have a very limited feature set when compared to something like SQL Server. That's not to say it's a bad thing, but it can be difficult not having some of the things that you're typically used to. You can read up much more on the difference between the two in Azure Table Storage and Windows Azure SQL Database - Compared and Contrasted. There are no foreign keys, for example, and joins and stored procedures don't exist either, but the biggest thing for me to get my head around was the lack of a row count feature. In Table Storage, if you want to keep track of your row count, you have to keep track of it yourself. If you don't keep track of your row count, the only way to obtain it is to query out your entire dataset and count the records in it. That's an incredibly slow, inefficient and arduous task! In coming blogs I'm going to be covering a lot of the problems that I hit whilst trying to adapt to using Table Storage and how I adapted my implementation of the service to get the best possible performance and scale out of it: keeping track of the count of incoming reports, querying against potentially huge datasets efficiently, offloading my PHP session storage to Azure so that I could have truly ephemeral application servers behind my load balancers, and much, much more.
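To make the "keep track of it yourself" idea concrete, here is a minimal sketch using the current Python azure-data-tables SDK rather than the PHP that report-uri.io itself runs on; the table, key and property names are invented purely for illustration:

```python
# A minimal sketch of maintaining the row count yourself, using the Python
# azure-data-tables SDK. The table, keys and property names are invented
# for illustration, not taken from report-uri.io.
from azure.data.tables import TableClient, UpdateMode
from azure.core.exceptions import ResourceNotFoundError

conn_str = "<storage-account-connection-string>"   # placeholder
counters = TableClient.from_connection_string(conn_str, table_name="counters")

def increment_report_count(site: str) -> int:
    """Bump a per-site counter entity alongside every report you insert."""
    try:
        entity = counters.get_entity(partition_key="reportCount", row_key=site)
        entity["count"] += 1
    except ResourceNotFoundError:
        entity = {"PartitionKey": "reportCount", "RowKey": site, "count": 1}
    # A production version would use ETags (optimistic concurrency) so two
    # concurrent writers can't overwrite each other; omitted here for brevity.
    counters.upsert_entity(entity, mode=UpdateMode.REPLACE)
    return entity["count"]
```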
An article written on this topic today shouldn't leave out Azure DocumentDB, especially when talking about NoSQL. Table Storage is not real NoSQL; it is just a massive-scale key-value store.