Senior Solution Architect - Presales with 51-200 employees
User
Aug 2, 2021
Well, there are many things to consider, but I will start with scalability.
In HCI solutions, scalability is achieved by adding nodes, while in dHCI (disaggregated HCI, hyper-converged solutions that use a SAN) you can expand the compute nodes or the storage independently. That makes dHCI more flexible, letting you address your compute or storage needs in a tailored way.
The other thing to consider is availability.
HCI solutions base their availability on RAIN (Redundant Array of Inexpensive Nodes). This means you have more than one copy of your data, located on different nodes; if a node fails, your data remains protected and accessible. Moreover, it is extremely easy to set up a stretched cluster.
SAN-based architectures usually hold just one copy of your data, unless you use more than one storage system plus a replication solution.
Another thing to consider is operations. HCI environments are easy to use, set up, and scale. SAN-based solutions, on the other hand, require more knowledge and more maintenance effort (fabric OS updates, HBAs, etc.).
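To make the RAIN point concrete, here is a minimal back-of-the-envelope sketch (illustrative Python with made-up node sizes and replication factors, not tied to any vendor) showing how mirrored copies across nodes trade raw capacity for the ability to lose a node without losing data:

```python
# Illustrative arithmetic only: how HCI-style replication (RAIN) trades raw
# capacity for node-failure tolerance, versus a single-copy SAN volume.
# Node sizes and replication factors below are assumptions for the example.

def hci_usable_tb(nodes: int, raw_tb_per_node: float, replicas: int) -> float:
    """Usable capacity when every block is stored `replicas` times across nodes."""
    return nodes * raw_tb_per_node / replicas

def hci_survives_node_loss(replicas: int) -> bool:
    """With 2+ copies on different nodes, data stays accessible if one node fails."""
    return replicas >= 2

if __name__ == "__main__":
    nodes, raw_per_node = 4, 20.0   # hypothetical 4-node cluster, 20 TB raw per node
    for rf in (2, 3):
        print(f"RF={rf}: usable ~{hci_usable_tb(nodes, raw_per_node, rf):.0f} TB, "
              f"survives one node failure: {hci_survives_node_loss(rf)}")
    # A SAN exposes roughly the array's usable capacity once, but as a single copy:
    # protecting against array loss means a second array plus replication.
    print("Single SAN copy: usable ~80 TB raw (minus RAID overhead), one copy by default")
```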
Solutions Architect/Team Lead - Business Data and Data Protection at a tech consulting company with 501-1,000 employees
Real User
Jun 2, 2020
Whether to go 3-tier (aka SAN) or HCI boils down to asking yourself what matters most to you:
- Customization and tuning (SAN)
- Simplicity and ease of management (HCI)
- Single number to call support (HCI)
- Opex vs Capex
- Pay-as-you-grow (HCI)/scalability
- Budget cycles
If you are a company that only gets budget once every 4-5 years and cannot get capital expenditure approved for storage in between, pay-as-you-grow becomes less viable, and HCI is designed with that in mind. It doesn't rule out HCI, but it does reduce some of the value gained. Likewise, if you are on a budget cycle that replaces storage and compute at different times and have no means to repurpose them, HCI is a tougher sell to upper management: HCI requires you to replace both at the same time, and sometimes the capital budgets don't work out.
There are also some workloads that will work better on a 3-tier solution than HCI, and vice versa. HCI works very well for almost everything except VMs with very large storage footprints. One of the key aspects of HCI performance is local reads and writes; a workload that is a single large VM will effectively require two full HCI nodes to run and will need far more storage than compute. Video workloads come to mind: body cams for police, surveillance cameras for businesses and schools, graphic editing. Those workloads don't reduce (deduplicate/compress) well and are better suited to a simple, low-feature SAN such as an HPE MSA.
HCI runs VDI exceptionally well, and nobody should ever do 3-tier for VDI going forward. General server virtualization can realize the value of HCI, as it radically simplifies management.
3-tier requires complex management and time, as you have to manage the storage, the storage fabric, and the hosts separately and with different toolsets. This also leads to support issues: you will frequently see the three vendor support teams blame each other. With HCI, you call a single number and they support everything, so you can drastically reduce your opex by simplifying support and management. If you're planning for growth up front and cannot pay as you grow, 3-tier will probably be cheaper. HCI gives you the opportunity not to spend capital if you end up missing growth projections, and to grow past planned growth much more easily, since adding a node is much simpler than expanding storage, networking, and compute independently.
In general, it's best to start with HCI and work to disqualify it rather than the other way around.
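As a rough illustration of the pay-as-you-grow argument above, the following sketch (hypothetical prices and growth figures, purely for comparison) shows how deferring node purchases until growth actually materializes can beat buying a fully sized 3-tier stack up front:

```python
# Toy capex comparison: buy a 3-tier stack sized for projected growth up front,
# versus buying HCI nodes only when growth actually happens.
# All prices and growth figures are invented for illustration.

UPFRONT_3TIER = 400_000   # hypothetical: SAN + fabric + hosts sized for year-3 demand
HCI_NODE_PRICE = 60_000   # hypothetical: one HCI node (compute + storage + license)
NODES_YEAR_0 = 4          # minimum cluster bought on day one

def hci_spend(growth_nodes_per_year: list[int]) -> int:
    """Total HCI spend: initial cluster plus nodes added only as demand appears."""
    return HCI_NODE_PRICE * (NODES_YEAR_0 + sum(growth_nodes_per_year))

if __name__ == "__main__":
    projected = [2, 2, 2]   # nodes/year you *planned* for
    actual    = [1, 0, 1]   # nodes/year you *actually* needed
    print("3-tier, sized for projection up front:", UPFRONT_3TIER)
    print("HCI if projection holds:   ", hci_spend(projected))
    print("HCI if growth falls short: ", hci_spend(actual))
```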
Senior Technical Enterprise Engineer - VMware at R A Consulting Services
Real User
Jun 3, 2020
There are multiple factors to look at when selecting one over the other.
1. Price - HCI is cheaper if you are refreshing your complete infrastructure stack (compute/storage/network); however, if you are only buying individual components, such as compute or storage, then 3-tier infrastructure is cheaper.
2. Scalability - HCI is highly and easily scalable.
3. Support - With a 3-tier architecture, you have multiple vendors/departments to contact for support on the solution, whereas with HCI you contact a single vendor that addresses all your issues.
4. Infrastructure - For a very small infrastructure, a 3-tier architecture based on an iSCSI SAN can be a little cheaper. However, for a medium or large infrastructure, HCI comes out cheaper every time.
5. Workload type - If you are running VDI, I strongly recommend HCI. For a passive secondary site, 3-tier can be fine. Run benchmarking tools to understand your actual requirements (a small example follows this list).
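For point 5, here is one possible way to capture a baseline I/O number before deciding. It is just a thin wrapper around the common fio tool; it assumes fio is installed, the test file path, size, and runtime are arbitrary example values, and fio's JSON field layout can differ slightly between versions:

```python
# Rough benchmarking helper: runs a short fio random-read test and prints IOPS.
# Assumes fio is installed; parameters are example values only.
import json
import subprocess

def quick_fio_iops(test_file: str = "/tmp/fio.testfile") -> float:
    cmd = [
        "fio", "--name=randread", "--rw=randread", "--bs=4k",
        "--size=256M", "--runtime=30", "--time_based",
        "--filename=" + test_file, "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0]   # JSON layout may vary by fio version
    return job["read"]["iops"]

if __name__ == "__main__":
    print(f"4k random read: ~{quick_fio_iops():.0f} IOPS")
```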
Solutions Architect Data Center Servers and Storage at Tecnologia informatica
Reseller
Jun 2, 2020
There are so many variables to consider.
First of all, keep in mind that a trend is not a rule; your needs should be the basis of the decision, so you don't have to choose HCI just because it's the new kid on the block.
To start, think with your pocket. SAN is expensive if you are building the infrastructure from scratch: cables, switches, and HBAs cost more than traditional LAN components, and a SAN requires more experienced people to manage the connections and the issues. On the other hand, SAN has particular benefits in sharing storage and server functions: you can keep primary disk and backup on the same SAN and use specialized backup software and features to move data between storage components without directly impacting server traffic.
SAN cabling has details to consider, such as distance and speed. Fiber quality (purity) is critical for reaching distance: the longer the run, the lower the supported speed, and transceiver cost can be the worst nightmare. Still, a SAN can connect storage boxes hundreds of miles apart, while the LAN cabling typical of HCI has a 100-meter limit unless you add a WAN, repeaters, or cascaded switches, each of which adds some risk to the scenario.
Think about required capacity: do you need terabytes or petabytes? A few dozen TB can be fine on HCI, but if you are dealing with PBs, think SAN. What about availability? Several ordinary nodes replicating around the world, while respecting the latency limits, can be handled with HCI; but if you need the highest availability while replicating a large amount of data, choose a SAN.
Speed, if it is a pain point: the LAN for HCI starts at a minimum of 10 Gb and can go up to 100 Gb if you have the money; Fibre Channel SAN only goes up to 32 Gb, and your storage controller must run at the same speed, which can drive the cost sky-high.
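To put those link speeds in perspective, here is a quick and deliberately simplistic calculation of how long it takes to push a given amount of data over the interconnects mentioned above; it ignores protocol overhead, encoding, and multipathing, so treat the numbers as order-of-magnitude only:

```python
# Order-of-magnitude only: time to move a dataset over various link speeds.
# Ignores protocol/encoding overhead, multipathing, and real-world contention.

LINKS_GBPS = {"10 GbE": 10, "25 GbE": 25, "100 GbE": 100, "32 Gb FC": 32}

def transfer_hours(data_tb: float, link_gbps: float) -> float:
    bits = data_tb * 8e12            # 1 TB (decimal) = 8e12 bits
    return bits / (link_gbps * 1e9) / 3600

if __name__ == "__main__":
    dataset_tb = 50                  # hypothetical dataset to replicate or migrate
    for name, gbps in LINKS_GBPS.items():
        print(f"{name}: ~{transfer_hours(dataset_tb, gbps):.1f} h to move {dataset_tb} TB")
```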
Scalability: HCI can have dozens of nodes replicating and adding capacity, performance, and availability around the world. With SAN storage you have a limited number of replicas between storage boxes; depending on the manufacturer, you can normally have up to about four copies of the same volume distributed around the world, and scalability is capped by the controllers' limits. It is a scale-up model, whereas HCI grows with a scale-out model.
Functionality: SAN storage can handle features in hardware, such as deduplication, compression, and multiple kinds of traffic (file, block, or object); HCI handles only block and needs extra hardware to accelerate processes like dedupe.
HCI is a way to share storage over the LAN, with dependencies such as the hypervisor and software or hardware accelerators. SAN is the way to share storage with servers; it is like a VIP lounge: an exclusive set of server guests shares the buffet, and they can share the performance of hundreds of drives to support the most critical response times.
It all depends on how you understand and use HCI:
If you see HCI as an integrated solution where storage is integrated into the servers and software-defined storage is used to create a shared pool of storage across compute nodes, performance will be the deciding factor between HCI and a traditional SAN. Most vendors' HCI solutions write data two or three times for redundancy across compute nodes, so there is a performance impact on the applications due to the latency of the network between the nodes. Putting in 25 Gb networks, as some vendors recommend, is not always a solution, since it is not the bandwidth but the latency of the network that defines the performance.
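A minimal sketch of why latency rather than bandwidth dominates here (the RTTs and media service times below are assumptions, not measurements): each write must be acknowledged by the remote replica nodes before the application sees it complete, so the inter-node round trip is added on top of the media latency regardless of how wide the pipe is.

```python
# Toy model: effective write latency when a write must be acknowledged by
# remote replicas before completion. RTTs and media latency are assumed values.

def write_latency_us(media_us: float, inter_node_rtt_us: float, replicas: int) -> float:
    """Local media time plus one network round trip to the remote copies
    (remote copies are written in parallel, so one RTT dominates)."""
    remote_hops = 1 if replicas > 1 else 0
    return media_us + remote_hops * inter_node_rtt_us

if __name__ == "__main__":
    nvme_us = 80                      # assumed local NVMe write service time
    for label, rtt in [("same rack, quiet network", 50), ("busy/oversubscribed network", 300)]:
        print(f"{label}: ~{write_latency_us(nvme_us, rtt, replicas=2):.0f} us per write "
              f"(vs ~{nvme_us} us with no remote copy)")
    # Moving from 10 GbE to 25 GbE barely changes the RTT term for small writes,
    # which is the point made above about bandwidth vs latency.
```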
Low-latency application requirements might push customers to a traditional SAN in this case. If you use HCI for ease of management through a single pane of glass, I see many storage vendors delivering plugins for server and application software that eliminate the need to use legacy SAN tools to create volumes and present them to the servers. Often you can create a volume directly from within the hypervisor console and attach it to the hypervisor hosts. So for this scenario, I don't see a reason to choose one over the other.
Today there is a vendor (HPE) combining a traditional SAN with an HCI solution and calling it dHCI. It gives you an HCI user experience, independent scalability of storage and compute, and the low latency that is often required. Over time I expect other vendors to follow the same path and deliver these kinds of solutions as well.
As mentioned earlier, with new technologies I don't see why you wouldn't use HCI.
I think the team is an important factor: when you have a small team, you end up opting for a fully integrated solution.
HCI is wonderful; it lets you work with scalability and redundancy, and there are tools that provide agile backup.
The traditional architecture makes many analysts more comfortable, but for small teams it ends up being an overload.
I use both architectures. For large, volatile data volumes, I believe a pure HCI investment comes at a high cost, since adding storage means adding another host.
As for abandoning the SAN you already have, in my opinion that is something very drastic. Each product has its strengths; array-based replication is still my favorite, even though there are very good replication solutions in HCI.
It's worth analyzing the whole picture: the size of the environment, the technical team and its qualifications, and the kind of application you want to run. The financial investment is important, but the cheaper option up front can end up being more expensive in the end.
I've seen companies connect their SAN to HCI, not always for performance reasons, but because it already exists, because there are low-cost options, or because of capacity requirements.
But when everything is new, with HCI it is possible to buy just the minimum and grow, whereas with a SAN you need to pre-size the number of ports, the capacity, the processing, and the speed for the whole growth journey, which can make the project more expensive.
Works at a financial services firm with 10,001+ employees
Real User
Aug 2, 2021
Business-wise, with direct savings across architecture, hardware, software, backup, and recovery, hyperconvergence can transform IT organizations from cost centers into frontline revenue drivers. A major issue with traditional IT architecture was that as complexity rose, the focus shifted from business problems to tech problems. The business’s focus should be on what IT can do for the bottom line, not what the bottom line can do for IT.
- Capital expenditures (CAPEX): the one-time purchase and implementation expenses associated with the solution.
- Operational expenditures (OPEX): the running costs of an IT solution – better known as the total cost of ownership (TCO) – incurred for managing, administering, and updating the existing IT infrastructure.
Considering the separate areas of cost reduction discussed above, organizations can evaluate the expense differential between their traditional infrastructure and an HCI environment.
Hyperconvergence helps meet current and future needs, so it’s essential to calculate the TCO accurately. The TCO of a hyperconverged infrastructure includes annual maintenance fees for data centers and facilities, telecom services, hardware, software, cloud systems, and external vendors. Other costs include staff needed for deployment and maintenance, staff training and efforts to integrate with existing and legacy systems.
HCI avoids the enormous waste of resources and budget common in the early phases of traditional infrastructure deployments, where the purchased scale dwarfs business needs at the time of purchase. HCI lends itself to incremental and granular scaling, allowing IT to add or remove resources as the business grows.
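A trivial way to structure that comparison is to roll CAPEX and the recurring OPEX items listed above into one number per option over a fixed horizon; every figure below is a placeholder, not a quote:

```python
# Skeleton TCO comparison over a fixed horizon. All figures are placeholders
# to be replaced with real quotes; the point is the structure, not the numbers.

def tco(capex: float, annual_opex: dict[str, float], years: int = 5) -> float:
    """CAPEX plus the sum of recurring OPEX items over `years`."""
    return capex + years * sum(annual_opex.values())

if __name__ == "__main__":
    traditional = tco(
        capex=350_000,
        annual_opex={"hw/sw maintenance": 40_000, "facilities/telecom": 15_000,
                     "storage + fabric admin time": 60_000, "training": 5_000},
    )
    hci = tco(
        capex=300_000,
        annual_opex={"hw/sw maintenance": 45_000, "facilities/telecom": 12_000,
                     "admin time (single console)": 30_000, "training": 8_000},
    )
    print(f"5-year TCO -- traditional 3-tier: {traditional:,.0f}, HCI: {hci:,.0f}")
```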
Scalability and agility are the main factors in deciding between SAN and HCI. SAN infrastructure requires a huge amount of work when it reaches an end-of-support or end-of-life situation. Budgeting and procurement frequency also play a role.
Also, HCI's limitation of presenting a single datastore in a VMware environment is a problem when disk or data corruption happens.
Country Head SSID & GM South at an import and export company with 51-200 employees
Reseller
Jun 6, 2020
If things are already working in the traditional way and not much growth is expected, then SAN is suitable. However, if you are on a cloud journey or already virtualized, then HCI suits you better.
Compwire Tecnologia at a tech vendor with 51-200 employees
User
Jun 3, 2020
There are two kinds of SAN (FC SAN and IP SAN), and both use the SCSI-3 protocol:
- FC SAN achieves bandwidths of 16 and 32 Gbps.
- IP SAN achieves bandwidths of 1, 10, and 25 Gbps.
SAN generally uses CI (Converged Infrastructure): “n” COMPUTE nodes, “n” NETWORK nodes, and “n” STORAGE nodes.
HCI (Hyper-Converged Infrastructure) uses only an Ethernet network (1, 10, or 25 Gbps), carrying the SCSI-3 protocol. Each node is connected to an aggregate of nodes (a cluster of up to 64 nodes), and each node provides all three functions (COMPUTE + NETWORK + STORAGE). These nodes are managed by a hypervisor (VMware, Nutanix, ...).
If STORAGE capacity grows rapidly, HCI (Hyper-Converged Infrastructure) will not be the most suitable solution!
The two main problems are the network and the SCSI-3 protocol: high latency and a 25 Gbps ceiling!
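One way to see why the network becomes the pain point as capacity per node grows: when a node fails, its data has to be re-protected across the cluster network. The sketch below (assumed node sizes and a single effective link, which is pessimistic since real clusters rebuild from many nodes in parallel) shows rebuild time stretching with capacity:

```python
# Illustration of the capacity-growth concern: re-protecting a failed node's
# data over the cluster network. Node sizes are assumptions; treat the result
# as a pessimistic single-link bound, not a vendor-specific figure.

def rebuild_hours(node_capacity_tb: float, usable_link_gbps: float) -> float:
    bits = node_capacity_tb * 8e12
    return bits / (usable_link_gbps * 1e9) / 3600

if __name__ == "__main__":
    for tb in (20, 60, 150):          # hypothetical raw TB stored on the failed node
        print(f"{tb} TB node over ~20 Gbps effective: "
              f"~{rebuild_hours(tb, 20):.1f} h to re-protect")
```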
The choice is more philosophical than deterministic; it depends on what you're going to run on this new infrastructure. All the answers are excellent, and I hadn't had all these aspects in mind, but before choosing one or the other: what do you need the SAN or HCI for? Who is going to implement and maintain the solution?
Business Development Manager at a tech services company with 501-1,000 employees
User
Jun 3, 2020
There are many factors to consider before making a decision.
What data will be stored on the device? Depending on that, you have to decide whether to go for dedicated storage or an HCI solution; for example, for file-only data an HCI solution is not the right fit. You also have to take the cost of the device into consideration.
What about scalability? If you require more scalability or expandability, cost factors come into the picture again, because some HCI solutions need a capacity-based license, and the solutions that don't require one have a higher base cost. You also have to consider the per-TB cost. In HCI solutions, storage capacity is limited.
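To compare the two licensing models mentioned above, a per-TB calculation like the one below can help; the prices are invented placeholders, not real list prices:

```python
# Placeholder per-TB comparison of a capacity-licensed solution versus one with
# a higher base price but no capacity license. All prices are invented.

def cost_per_tb(base_cost: float, per_tb_license: float, usable_tb: float) -> float:
    return (base_cost + per_tb_license * usable_tb) / usable_tb

if __name__ == "__main__":
    for tb in (50, 200, 500):
        a = cost_per_tb(base_cost=150_000, per_tb_license=400, usable_tb=tb)  # capacity-licensed
        b = cost_per_tb(base_cost=250_000, per_tb_license=0,   usable_tb=tb)  # higher base, no per-TB fee
        print(f"{tb} TB usable: capacity-licensed ~{a:,.0f}/TB, flat-licensed ~{b:,.0f}/TB")
```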
Performance - HCI solutions may not be able to deliver for specialized workloads where higher performance is required.
Beyond this, many other factors need to be taken into consideration, such as the backup solution and the DR site.
Senior Account Manager with 5,001-10,000 employees
Real User
Jun 2, 2020
In my opinion, the key factors to consider between a traditional SAN and HCI are:
- Existing infrastructure: if the business already has legacy systems that need to be reused in the new infrastructure, that can condition your decision.
- Performance: even with new technologies like SSDs and NVMe, the performance of a SAN with FC or iSCSI is significantly better than HCI, so if the business applications need maximum performance, SAN is the choice.
- Scalability: HCI solutions evolve fast and now scale better, but in any case SAN technology has proven good scalability over the years.
- Manageability: consider the features for managing and monitoring your infrastructure, as well as the skill profile of your IT team; new tools and architectures require new skills and new training.