The product's scalability is an area of concern that requires improvement. The support team is sometimes unaware of the issues customers face; from an improvement perspective, the support team should understand the product-related issues that can arise on the customers' end.
Improving the pricing structure would be beneficial. If the pricing could be more flexible and reduced, Microsoft Azure Block Storage would likely see increased usage.
The user interface is difficult and not user-friendly; it takes time to understand the product and create a container. It would be nice if the user interface were made simpler. The documentation is also hard to understand, and there are no videos on the website, so it is difficult for a new user to learn the solution.
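For readers who find the portal workflow hard to follow, below is a minimal sketch of creating a container programmatically with the azure-storage-blob Python SDK instead; the connection-string environment variable and container name are illustrative assumptions, not anything from the review.

```python
# A minimal sketch, assuming the azure-storage-blob package and a
# connection string stored in the AZURE_STORAGE_CONNECTION_STRING
# environment variable; the container name "my-container" is illustrative.
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# Create the container without going through the portal UI.
container = service.create_container("my-container")
print(f"Created container: {container.container_name}")
```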
Enterprise Architectural Design and Quality Assurance at a consultancy with 1-10 employees
Real User
Top 10
Apr 21, 2023
Microsoft Azure Block Storage could improve its SFTP support. SFTP can be used for exchanging data between two parties, and it works, but Microsoft is new to this market and its features in this area could be much better. Querying in the storage system could also be improved. SQL is a fundamental technology, but there are only partition rules and row IDs, and querying the data is slow. The date and time stamps are not indexed in any way, which makes it very slow. Additionally, there is no option to remove all data older than a month. There are areas where the solution can improve in the future.
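There is no one-click purge in the portal, but as a hedged illustration of one client-side workaround for the "delete data older than a month" gap, the sketch below lists blobs with the azure-storage-blob SDK and deletes the stale ones; the container name and environment variable are placeholders. Azure's server-side lifecycle management policies are the built-in alternative for doing this automatically.

```python
# A hedged sketch of purging blobs older than 30 days client-side with
# azure-storage-blob; names are illustrative. Lifecycle management
# policies can achieve the same result server-side.
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("my-container")

cutoff = datetime.now(timezone.utc) - timedelta(days=30)

# list_blobs yields BlobProperties objects that include last_modified.
for blob in container.list_blobs():
    if blob.last_modified < cutoff:
        container.delete_blob(blob.name)
        print(f"Deleted {blob.name} (last modified {blob.last_modified})")
```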
I encountered a challenge where certain metadata was located on a secured network that could only be accessed through a VPN. As there was no direct connector available to reach private networks that require VPN access, we had to rely on manual effort to access the data. This could be addressed as an area of improvement, especially regarding accessibility issues where data is only reachable over a VPN connection. The solution we are working with appears to still be in beta. We have been managing a large volume of files by pulling them in on fixed schedules: to determine whether a file is new or has been modified, we obtain its last-modified date and compare it with the previous value. However, there is a new concept of a rolling window that can automatically detect newly arrived files and push them into downstream processes. From the logs I have seen, Microsoft appears to be actively working on simplifying file management and reducing the need for engineers to manually pull files using custom logic, including a feature that automatically identifies new files and distinguishes them from old ones based on their metadata. It would be helpful to have this feature implemented.
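As a hedged illustration of the scheduled-pull pattern this reviewer describes, the sketch below compares each blob's last-modified timestamp against the value recorded on the previous run to spot new or changed files; the container name and the local JSON state file are illustrative assumptions.

```python
# A minimal sketch of change detection via last-modified comparison,
# assuming azure-storage-blob and a connection string in the
# AZURE_STORAGE_CONNECTION_STRING environment variable.
import json
import os

from azure.storage.blob import BlobServiceClient

STATE_FILE = "last_seen.json"  # hypothetical local state from the prior run

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("my-container")

# Load the timestamps recorded on the previous scheduled run.
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        last_seen = json.load(f)
else:
    last_seen = {}

changed = []
for blob in container.list_blobs():
    stamp = blob.last_modified.isoformat()
    if last_seen.get(blob.name) != stamp:
        changed.append(blob.name)  # new or modified since the last run
    last_seen[blob.name] = stamp

with open(STATE_FILE, "w") as f:
    json.dump(last_seen, f)

print(f"{len(changed)} new/modified files: {changed}")
```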
I have used Amazon's solutions, which are better than Microsoft's. However, since I predominantly use Microsoft solutions, it is a better choice for me to use another Microsoft solution.
The solution should also support unstructured data, which uses different file formats. It would also be good if the solution had notification features.
The performance of Microsoft Azure Block Storage needs improvement because it's laggy. My company used it in different places, including the Microsoft browser, but it still lags. Pricing is another area for improvement. Microsoft Azure Block Storage also has a lot of limitations on file sizes, and the rendering and loading times need improvement. Sometimes Microsoft Azure Block Storage crashes, an issue that needs to be addressed promptly, but Microsoft hasn't been able to fix it for a very long time.
Cloud Solution Architect at a computer software company with 5,001-10,000 employees
Real User
Nov 21, 2022
There is a drawback to the GRS storage feature: depending on the amount of data, failback can take a lot of time. There are a lot of steps from the application perspective, so it's not easy and straightforward from a business continuity perspective. The failover itself takes a couple of minutes, but failing back to the primary region takes a lot of time because you have to synchronize again. That's a challenge: if there are terabytes of data, replicating again in the reverse direction, from secondary to primary, costs extra. This only happens when there is an outage and DR is involved, so companies would not come across it often; it's a worst-case scenario. Still, some companies have very short DR windows, with a tight RPO (recovery point objective) and RTO (recovery time objective). In that case, this product may not meet the customer's requirements. Microsoft should focus on bringing the time down and making the DR process easier for this product, and as they continually improve it, they could reduce that recovery time.
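For context on the failover step the reviewer mentions, below is a hedged sketch of triggering a customer-initiated account failover with the azure-mgmt-storage management SDK; the subscription ID, resource group, and account name are placeholders. The slow part the review describes is afterwards: failing back requires re-enabling geo-replication and waiting for a full resynchronization.

```python
# A hedged sketch of a customer-initiated failover to the secondary
# region, assuming the azure-identity and azure-mgmt-storage packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Long-running operation: promotes the secondary endpoint to primary.
poller = client.storage_accounts.begin_failover(
    resource_group_name="my-rg",          # placeholder
    account_name="mystorageaccount",      # placeholder
)
poller.result()  # blocks until the failover completes
print("Failover complete; the account is now locally redundant in the former secondary region.")
```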
Encryption, data analytics and behaviour analytics can be improved. Encryption and usage analytics optimization, built into the setup, can take it to the next level.
The security is poor, and the sharing model is confusing, conceptually. I'd prefer not to use it. I'd like a user-friendly web interface and a better security model.
The implementation can be quite technical. We would like the solution to lower the costs as much as possible. It’s always better for the customer if it is less expensive.
One thing that needs improvement is authentication. They need to improve integrated Azure Active Directory at the enterprise level. For single sign-on, we can try any authentication or portal for Block Storage, Azure Functions, or AKS. For example, if you're an administrator or user contributor, you generate a token and then your internal middleware connects to an Azure cloud service. You need to generate different credentials for each service; we cannot use the same token. Some services, like Azure Key Vault, support a single access token that you should be able to reuse on the Kubernetes side, but some services don't. Authentication should be centralized. My understanding is that the data on this file path is streamed; whenever you get this data, it is converted to a streaming ByteArray and Base64. The file path is another security vulnerability. Azure Block Storage is mainly used for streaming data nowadays. Companies are moving to digital platforms and stream data from IoT, mobile, offline sources, and other systems. There are different styles and formats, including unstructured, semi-structured, relational, and platform data, so we cannot use a single database for all requirements. We cannot say to a client, "Sorry, I can only support this product in JSON." If we say that, competitors will dominate us. We must be prepared to accept any kind of input from clients. Block Storage supports semi-structured and structured data; if you go with File Storage, queuing or messaging handles the storage, and Block Storage handles videos or images.
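As a hedged illustration of the centralized-authentication pattern this reviewer asks for, the sketch below uses azure-identity's DefaultAzureCredential: one Azure AD-backed credential object is reused across services rather than generating separate credentials per service. The account and vault URLs are illustrative placeholders.

```python
# A minimal sketch of reusing one credential across Azure services,
# assuming the azure-identity, azure-storage-blob, and
# azure-keyvault-secrets packages.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient

# Resolves from environment variables, managed identity, CLI login, etc.
credential = DefaultAzureCredential()

# The same credential object authenticates against Blob Storage...
blobs = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # placeholder
    credential=credential,
)

# ...and against Key Vault; tokens are requested per-scope under the hood.
secrets = SecretClient(
    vault_url="https://myvault.vault.azure.net",  # placeholder
    credential=credential,
)
```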
System & Security Engineer at Connex Information Technologies
Real User
May 29, 2022
There are capacity limitations that could be expanded. When freeing up backup capacity on our disk, we have to create another block storage volume and integrate with it. These limitations may be an issue for customers; it should be more flexible. For example, if a customer needs 10 TB of capacity, they should be able to expand to that capacity.
When I installed Azure Storage Explorer on my system, I connected easily, and uploading data or files from there was very easy. However, in Azure Blob through the Azure Portal, uploading, downloading, or viewing files is a little difficult. That's the part they need to improve: downloading and uploading could be easier, and they could make files more readable. We can use other solutions in desktop versions and connect them to Azure files, where readability is easier. However, for anyone working on Linux or other environments, the Azure Portal is the best option available.
I cannot recall coming across any missing features. I have experience resolving various issues; however, troubleshooting can sometimes be very difficult because items are not well documented. It would be very good if the documentation were improved. Right now, if you face a question, you have to guess how something works or test your hypothesis. Better documentation would be great for newer, more basic users.
Director - Infrastructure Solution Architect at a tech services company with 51-200 employees
Real User
Sep 27, 2020
We need to do versioning every time, that is, check-in and check-out. If a person forgets to do that, there can be an issue with the replacement of that particular object in storage. It would be great if this could be managed automatically.
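As a hedged sketch of one way to guard against accidental replacement without a manual check-in/check-out step, the example below uses ETag-based optimistic concurrency in azure-storage-blob; the container and blob names are illustrative. Enabling blob versioning on the storage account is the fully automatic alternative, since every overwrite then retains the prior version.

```python
# A hedged sketch of conditional overwrite via ETags, assuming the
# azure-storage-blob and azure-core packages and a connection string in
# the AZURE_STORAGE_CONNECTION_STRING environment variable.
import os

from azure.core import MatchConditions
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client("my-container", "report.csv")  # placeholders

# Remember the ETag when reading the object...
props = blob.get_blob_properties()

# ...and require it to be unchanged when writing back. If someone else
# replaced the blob in the meantime, the upload fails instead of
# silently clobbering their version.
blob.upload_blob(
    b"updated contents",
    overwrite=True,
    etag=props.etag,
    match_condition=MatchConditions.IfNotModified,
)
```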
Co-Founder at a tech services company with 1-10 employees
Real User
Apr 19, 2020
In terms of improvement, it could use better integration with non-proprietary tools. Azure pushes you down the Data Factory road for integrations, but Data Factory isn't always a scalable, viable solution, particularly when using large numbers of connections to IoT solutions.
There is no option to delete a folder directly from the portal in Azure Block Storage. With a folder inside block storage, I am not able to delete the folder completely; I need to delete the files inside it first, and only then can I delete the folder. It would be nice if they could fix this.
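As a minimal sketch of the usual workaround: in flat blob storage, "folders" are just name prefixes, so deleting every blob under the prefix makes the folder disappear. The names below are illustrative.

```python
# A minimal sketch of deleting a virtual folder by prefix, assuming
# azure-storage-blob and a connection string in the environment.
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("my-container")

# Delete everything under the virtual folder "reports/2023/".
for blob in container.list_blobs(name_starts_with="reports/2023/"):
    container.delete_blob(blob.name)
```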
One of the biggest challenges with Azure infrastructure is that sometimes we don't get what we want; either we have to wait for it or contact support because something is not working for us. Most of the time it works fine, but sometimes, when we want something critical apart from block storage, such as increasing the capacity or the cores of a VM, we have to wait. We get a response from Microsoft from a North American location. We want to host everything within the UAE, but we are redirected through North America or Europe. That is a challenge with Azure; system availability is the difficult point. Billing is also an issue, and we have had a lot of problems with it. We started working with minimal resources and opted for pay-as-you-go mode. Microsoft needs to invest resources in improving the billing side. It's not about the money but the visibility: I see a lot of invoices, but I don't see whether we have paid them, and if we have, I don't see when or through which credit card or account we paid. I see invoices, but not the information related to those invoices.
Vice President - Data Architect at a financial services firm with 10,001+ employees
Real User
Mar 4, 2020
Azure had the Blob version prior to Block. Azure also realized that Blob Storage did not really have a hierarchical namespace, which was a limitation of the Azure Blob product. That is also the reason they decided to develop the Gen2 version, which puts back the right structure for storage and capacity, so it is good to explore Gen2 as another solution. I think most users will eventually move from Blob Storage to Gen2 because of this limitation. You also do not get the kind of folder search capability in Blob that you might expect: in Windows, you can go to a particular folder, browse within it, go into subfolders, and search there as well. That feature is missing in the Blob product. So the inability to easily search subfolders and the namespace limitation are really the biggest limitations of the product. What I would like to see in the next release, though I am not sure it is possible in cloud solutions, is a feature for distributing large files.
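As a hedged sketch of the hierarchical-namespace features the reviewer points to in Gen2, the example below uses the azure-storage-file-datalake SDK, which exposes real directories that can be listed and deleted as a unit; the filesystem and path names are illustrative assumptions.

```python
# A hedged sketch of Data Lake Storage Gen2 directory operations,
# assuming the azure-storage-file-datalake package and a connection
# string in the AZURE_STORAGE_CONNECTION_STRING environment variable.
import os

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
fs = service.get_file_system_client("my-filesystem")  # placeholder

# Browse a folder and its subfolders, the way you would in Windows.
for path in fs.get_paths(path="reports", recursive=True):
    kind = "dir" if path.is_directory else "file"
    print(f"{kind}\t{path.name}")

# Delete a whole directory in one call (not possible in flat Blob).
fs.get_directory_client("reports/2022").delete_directory()
```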
Uploading of data could be improved. Currently, the only way to upload your data is through their Storage Explorer, and you need to use a command line to do it. Then you can go online and use their explorer to check whether the data is there. A third-party application to handle uploads would be better; many people aren't familiar with command lines for uploading data, which makes it more difficult for many users. The pricing is also a bit high.
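For completeness, below is a minimal sketch of a programmatic upload with the azure-storage-blob SDK, as an alternative to Storage Explorer or the command line; the file, container, and blob names are illustrative.

```python
# A minimal sketch of uploading and verifying a file, assuming
# azure-storage-blob and a connection string in the environment.
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client("my-container", "data/input.csv")  # placeholders

with open("input.csv", "rb") as f:
    blob.upload_blob(f, overwrite=True)

# Verify the upload landed without opening the portal.
print(blob.get_blob_properties().size, "bytes uploaded")
```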
Product Manager at a tech services company with 201-500 employees
Reseller
Jan 12, 2020
I would like to see improvement in the networking and cost transparency part. What I mean is that we provide certain workflows for our customers, and we usually calculate what is local, but even then it's not really upfront. We must know what the usage will be in DMs, so the price is different, and that is where provisioning and cost transparency come in. We must know what it will cost in the OPEX work that we do. We do have some cost analysis in the cloud, but it's still not always the right amount that's reflected for our customers for each application or analysis they want to do. The invoice type is therefore different for each client.
Microsoft Azure Block Storage needs to improve its migration and reporting processes.
The platform's technical support services could be enhanced. They could provide better resolution and improve the response time.
Microsoft Azure Block Storage should improve its stability.
We cannot properly check the daily cost in the Azure portal. I'm going to raise a ticket to the Microsoft team as it is an Azure portal problem.
The tool could be cheaper.
Microsoft Azure Block Storage's stability needs improvement.
Although cheaper than Amazon, licensing expenses for Block Storage are pretty high.
There have been problems with integration, but it's getting better.
The solution can be improved by including quicker hard drive access and larger bandwidth as part of the standard licensing fee.
The solution could have better archival policies and localization of data.
Azure Block Storage could use more capacity.
It should be easier to deploy.
The biggest problem with Azure is the price; its cost is too high. Technical support also needs improvement.