Key factors include scalability, ease of management, performance, integration capabilities, cost-effectiveness, and security features.
Scalability is crucial for growing businesses, allowing them to adapt seamlessly without significant reconfiguration. Systems that are easy to manage reduce administrative overhead, enabling IT teams to focus on strategic tasks. Performance is vital in ensuring efficient operations and minimal downtime, which contributes positively to productivity.
Integration capabilities with existing systems and third-party applications help streamline processes and improve cross-platform compatibility. Cost-effectiveness is important to maximize investment returns while ensuring the chosen solution meets all necessary requirements. Security features are imperative to protect sensitive data and maintain compliance with industry standards.
Executive Vice President of Sales and Marketing with 11-50 employees
Real User
Jul 9, 2019
Cost metrics, Rob: Capex, Opex savings, and even a TCO should be accounted for.
1) Operational efficiency assumptions based on assessments. This should yield time to deploy, VM-to-admin ratios, device consolidation, and power usage.
2) My most important consideration is the Recovery Time Objective and how well the solution sustains operations without data loss. The Recovery Point Objective (RPO) measures how far back you can go without losing data, and the RTO is how long it takes to bring mission-critical devices back online (a quick check of both is sketched below).
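To make those two objectives concrete, here is a minimal Python sketch of checking achieved RPO and RTO against targets after an incident; the targets and timestamps are made-up illustrative values, not figures from any particular HCI product.

from datetime import datetime, timedelta

# Hypothetical targets for a mission-critical workload (illustrative values only).
RPO_TARGET = timedelta(minutes=15)   # maximum tolerable data loss
RTO_TARGET = timedelta(hours=1)      # maximum tolerable time to restore service

# Example incident timeline (assumed values).
last_good_copy   = datetime(2019, 7, 9, 2, 50)   # last successful replica/backup
failure_time     = datetime(2019, 7, 9, 3, 0)
service_restored = datetime(2019, 7, 9, 3, 40)

achieved_rpo = failure_time - last_good_copy      # data written after this point is lost
achieved_rto = service_restored - failure_time    # how long the workload was offline

print(f"RPO: {achieved_rpo} vs target {RPO_TARGET} -> {'OK' if achieved_rpo <= RPO_TARGET else 'MISSED'}")
print(f"RTO: {achieved_rto} vs target {RTO_TARGET} -> {'OK' if achieved_rto <= RTO_TARGET else 'MISSED'}")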
Since you will find yourself managing VMs, you might consider a cost analysis there as well. (Remember, you won't be managing devices any longer.)
Your benefits in using an HCI are:
1) A VM Centric Approach
2) A software-defined datacenter (less replacement, better utilization, pay as you go)
3) Data Protection
4) Lower costs
5) Centralized and even automated self-management tools.
For me an HCI solution should provide me:
- ease of management, one console does it all, no experts needed, a cloud experience but with on-premises guarantees
- invisible IT, don't care about the underlying hardware, 1 stack
- built-in intelligence based on AI for monitoring and configuration
- guaranteed performance for any workloads, also when failures occur
- data efficiency with always-on dedupe and compression
- data protection including backup and restore
- scalability, ease of adding resources independent of each other (scale up & out)
- a single line of support
Senior Manager, APJ Product Marketing at SolarWinds
Real User
2019-07-09T03:49:20Z
Jul 9, 2019
While there is a long list of features/functions that we can look at for HCI, in my experience of creating HCI solutions and selling them to multiple customers, here are some of the key things most customers boil it down to:
1) Shrink the data center:
This is one of the key "customer pitches" that all the big giants have for you: "We will help you reduce the carbon footprint with hyperconverged infrastructure." It is good to understand how much reduction they are actually offering. Can 10 racks come down to two, fewer, or more? With the many data-reduction technologies included, and compute + storage residing in those nodes, what I mentioned above is possible, especially if you are sitting on legacy infrastructure.
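As a rough illustration of that rack math, here is a back-of-the-envelope sketch; every number in it (node capacity, replication factor, data-reduction ratio, node height) is an assumption you would replace with the vendor's own sizing figures.

# Back-of-the-envelope rack consolidation estimate (all inputs are assumptions).
legacy_rack_units = 10 * 42          # 10 full racks of legacy gear, 42U each
usable_tb_needed  = 400.0            # current usable capacity requirement, in TB

node_raw_tb        = 40.0            # raw capacity per HCI node (assumed)
replication_factor = 2               # copies kept for resilience (assumed)
data_reduction     = 2.5             # assumed dedupe + compression ratio
node_height_u      = 2               # rack units per node (assumed)

usable_per_node = node_raw_tb / replication_factor * data_reduction
nodes_for_capacity = int(-(-usable_tb_needed // usable_per_node))   # ceiling division
hci_rack_units = nodes_for_capacity * node_height_u

# Compute sizing may demand more nodes than capacity does; take the larger of the two.
print(f"Usable TB per node: {usable_per_node:.0f}")
print(f"Nodes needed for capacity alone: {nodes_for_capacity}")
print(f"Rack units: {legacy_rack_units}U of legacy gear vs ~{hci_rack_units}U of HCI nodes")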
2) Ease of running it:
The other point of running and buying HCI is "set it and forget it". Not only should you look at how easy it is to set up and install the system, but also how long it takes to provision new VMs, storage, etc. It is worth probing your vendors to find out what they do about QoS, centralized policy management, etc. Remember that most HCI companies' portfolios differ at the software layer, and some of the features I mentioned above are bundled in their code and work differently with different vendors.
3) Performance:
This could be an architecture-level difference. In the race to shrink the hardware footprint, you could face performance glitches. Here is an example: when you switch on de-duplication and compression, how much effect does it have on overall CPU load, and thereby on the VMs? Ask your vendors how they deal with it. I know some of them offload such operations to a separate accelerator card.
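To get a feel for the kind of CPU tax inline data reduction imposes, here is a toy, self-contained measurement using Python's standard-library zlib as a stand-in; real HCI stacks use different algorithms and may offload the work to dedicated cards, so treat the numbers purely as an illustration of the trade-off.

import os
import time
import zlib

# Build a 64 MB buffer that is roughly half compressible: each 4 KB chunk is
# 2 KB of random data plus 2 KB of repeated bytes.
chunk_count = 16 * 1024
data = b"".join(os.urandom(2048) + b"A" * 2048 for _ in range(chunk_count))

start = time.perf_counter()
compressed = zlib.compress(data, 1)   # fast compression level, in the spirit of inline reduction
elapsed = time.perf_counter() - start

mb = len(data) / (1024 * 1024)
print(f"Compressed {mb:.0f} MB in {elapsed:.2f} s "
      f"(~{mb / elapsed:.0f} MB/s on one core), ratio {len(data) / len(compressed):.2f}x")
# Any write throughput beyond this single-core rate needs additional cores,
# which is CPU the hypervisor can no longer give to your VMs.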
4) Scaling up + Scaling out:
How easy is it to add nodes, both for compute and storage?
How long does it take to add nodes, and is there a disruption in service?
What technologies do the vendors use to create a multi-site cluster? Keep in mind whether the cluster can include remote sites too.
Can you add "storage only" or "compute only" nodes if needed?
All of the above have cost implications in the longer run.
5) No finger pointing:
Remember point number two? Most of these HCI products are based on other vendors' hardware, wrapped with their own HCI software and made to behave in a specific way. If something goes wrong, is your vendor willing to take full accountability and not ask you to speak with the hardware vendor? It is a good idea to look for a vendor with a bigger customer base (not just for HCI but for compute and storage in general), making them a single point of contact with more resources to help you in case anything goes wrong.
Development Support Engineer at an engineering company with 1-10 employees
Real User
Jul 8, 2019
In my opinion, the most important criterion when assessing HCI solutions, other than the obvious one of performance, is how the HCI solution scales; in other words, how one adds storage and compute resources to the solution. Without understanding how the solution scales, one can easily request resources without understanding how and why the overall costs have ballooned. The costs can balloon not only because you're adding additional nodes to your HCI cluster for the additional storage and compute resources that were needed, but also because additional compute nodes added to the cluster require additional licensing for whichever hypervisor the HCI solution depends upon, usually on a per-compute-node basis. For example, some HCI architectures allow admins to add only storage to the HCI cluster when additional storage is needed, without requiring the purchase of any additional licensing from the hypervisor's perspective. On the other hand, some HCI architectures require you to add a compute node along with the additional storage you need, even if you don't need the compute resources that come with it. That compute node will then need to be properly licensed as well. This type of architecture can, and usually does, force its consumers to spend more money than the circumstances initially dictated. So for me, how the HCI solution scales is most important, because it can ultimately determine how cost-effective the HCI solution really is.
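A simplified cost model of that difference might look like the sketch below; the node capacities, node prices, and license cost are entirely made-up placeholders, not quotes from any vendor.

# Hypothetical comparison: growing usable capacity by 100 TB under two HCI architectures.
extra_usable_tb = 100.0

# Architecture A: storage-only expansion nodes, no extra hypervisor licenses needed.
storage_node_tb, storage_node_cost = 25.0, 20_000

# Architecture B: every expansion node is a full compute+storage node and needs
# a per-node hypervisor license even when the extra CPU is not required.
combo_node_tb, combo_node_cost, hypervisor_license = 25.0, 35_000, 7_000

def ceil_div(amount, per_node):
    return int(-(-amount // per_node))

nodes_a = ceil_div(extra_usable_tb, storage_node_tb)
nodes_b = ceil_div(extra_usable_tb, combo_node_tb)

cost_a = nodes_a * storage_node_cost
cost_b = nodes_b * (combo_node_cost + hypervisor_license)

print(f"Storage-only expansion:    {nodes_a} nodes, ${cost_a:,}")
print(f"Compute+storage expansion: {nodes_b} nodes, ${cost_b:,}")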
HCI solutions have matured over time. While the swing in the global market is a yo-yo between VxRail and Nutanix, there are quite a few new vendors who have brought hardware-agnostic solutions to the market. Management and ease of implementation were the key yesterday. Of late, I see a plethora of customers who need multi-cloud connectors available. Nutanix has taken a decent lead here with the acquisition of Calm. It is pricey though; a minimum pack with a yearly subscription provides for 25 VMs. VxRail from Dell EMC has a lot to catch up on there, though it comes with a free API connector to AWS, free for the first three TB and then priced per TB of movement between private and public cloud. DISA STIG compliance is yet another point customers are interested to see in the solutions. Nutanix claims their code is built to comply with these rigorous standards for a secure virtualization layer with AFS, whereas Dell EMC offers pretested scripts to ensure the environment can comply with the standards.
Backup companies are vying to get their products certified. Wonder what Nutanix would have for the currently certified solutions, post their acquisition of "Mine". It still has miles to go.
Well, I think that the #1 criterion is that the HCI solution must be very resilient: High Availability, Business Continuity, Rapid Deploy. I think that an HCI solution must be able to manage all the infrastructure (network, disks, processors, RAM, apps, etc.) and be easily scalable. I also think that TCO should be low.
System Administrator for virtual platforms at a healthcare company with 1,001-5,000 employees
Real User
Jul 8, 2019
Maintenance tasks should be simple, with no downtime and no degradation of availability or resources. This is a must for long-term hyperconverged infrastructure. The cost-benefit should not grow exponentially, or else traditional solutions will win the budget planning.
Services Principal at a tech vendor with 1,001-5,000 employees
User
Nov 1, 2021
Like all requirements, the answer is often "it depends". I personally feel that the HCI solution's key benefit is SUPPOSED to be the simplicity to build and manage (more MANAGE than Build typically)... so I will leave manageability as "table stakes" for all solutions.
One of the original limitations of HCI was the scalability of storage independent of compute and vice versa. This has been addressed through different methods over the years, but be aware of the investment size for updating/growing in these situations.
Find out the inflection points that will require investment, figure out those investment budgets, and walk in with your eyes open. Lower operational expenses are not free; they typically come back as additional capital expenses. If you are not large or flexible enough to save budget with fewer people operating the systems, then you will INCREASE budget spend with HCI... though you will have more time to do more valuable work. My $0.02.
Most of my peers have consolidated the key points, and I don't wish to dwell on the same again. That said, I wish to restate that the HCI market, and most of the solutions in it, are very mature. The buzzwords have shifted, though: the "scale-out" approach is de facto, and so are the sensitivities towards reducing the learning curve, high resiliency, etc. A focus on DevOps dominates today, with most vendors talking about integration with "microservices".
In most cases that I have personally dealt with, customers did not have, or were never exposed to, a proper workload estimation. I would suggest that you perform the mandatory exercise of understanding your workload. This plays a key role, as one can begin with a small two-node cluster and then scale out. There are quite a few tools out there, but personally I like Live Optics. Run the tool against your existing servers and virtual machines as well; it can open your eyes in determining how many nodes to start with. The tool also gives you insight into planning your budget for additional nodes and quantifying your reduction in carbon footprint (a sizing sketch follows below).
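As a sketch of the sizing exercise such an assessment feeds into, the snippet below turns an aggregated peak workload into a starting node count; the per-node specs, headroom, and N+1 spare are assumptions, not output from Live Optics or any vendor sizer.

import math

# Aggregated peak demand from the workload assessment (assumed numbers).
workload = {"vcpus": 320, "ram_gb": 2048, "usable_tb": 60.0}
# One HCI node's usable capacity (assumed spec).
node = {"vcpus": 96, "ram_gb": 512, "usable_tb": 20.0}

headroom = 0.8      # keep 20% free for growth and rebuilds
spare_nodes = 1     # N+1, so one node can fail or be patched without impact

nodes_needed = max(
    math.ceil(workload[k] / (node[k] * headroom)) for k in workload
) + spare_nodes

print(f"Start with {nodes_needed} nodes (N+1 included); re-run once dedupe/compression "
      f"estimates are applied to see whether storage is still the constraint.")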
Whether to build one on-premises or look at IaaS as a solution is another aspect you should spend time on. Each has its pros and cons, and how you weigh them depends on your situation. If you need multitenancy, IaaS is what you should be leaning towards.
Team Leader of technical team with 11-50 employees
User
Jul 15, 2019
When our company was hired to help a customer migrate to a new computing infrastructure, a hyper-converged solution came to mind for all participants. We evaluated three solutions; the one that came out on top was from the provider who could deliver an integrated solution in terms of hardware and software, but software update support was also an important point. The chosen solution addressed the customer's needs: the automatically tested updates were a key point, and the administration interface with statistics and reports was also attractive to the client. The time to recovery in case of a major failure was the final point that helped choose the solution.
Specialist Design and Solutions at orben comunicaciones
Real User
Jul 11, 2019
I think the most important aspects are the following:
First of all, the response time of the solution is a metric that must be measurable and usable for the client to make decisions about the different HCI options on the market.
Another is the simplicity of deploying the solution in the real world; for the client it is a priority to acquire solutions, not more complex problems that make daily operation more difficult.
Finally, the vendor has to show a solid solution, meaning it has a future and a roadmap that makes the client feel comfortable and secure, trusting that the decision taken about the HCI acquired was the right one. This confidence is offered by a solid and capable HCI vendor who understands the client's needs.
The most important criteria would be an integrated set of platforms that enable secure composition of compute, storage, networking, and administration resources to seamlessly accommodate an application.
Manager ICT Solution at a tech services company with 11-50 employees
Reseller
Jul 9, 2019
First of all, a business should determine based on requirement whether they should opt for HCI or Traditional. Both roadmaps have their advantages and drawbacks.
The #1 criterion for HCI, from my view, is whatever best serves the objective of the project. Different projects head in different directions: some are based on performance-intensive workloads and some on capacity-intensive workloads. All vendors have their own ways of calculating overheads. While evaluating, also consider the licensing model and the features you add in a bundle. Every licensed feature added has a performance penalty on infrastructure sizing, so one vendor's spec is not necessarily equal to another's (see the sketch below). More licensing will impact renewal cost as well.
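One hedged way to compare quotes on a like-for-like basis is to normalize each vendor's raw sizing by the overhead of the features you will actually switch on; the feature list and overhead percentages below are placeholders you would replace with each vendor's own sizing guidance.

# Normalize two hypothetical vendor sizings by per-feature overhead (placeholder numbers).
features_enabled = ["dedupe", "compression", "encryption"]

vendors = {
    "Vendor A": {"raw_effective_vcpus": 1000,
                 "overhead": {"dedupe": 0.08, "compression": 0.05, "encryption": 0.04}},
    "Vendor B": {"raw_effective_vcpus": 1000,
                 "overhead": {"dedupe": 0.03, "compression": 0.03, "encryption": 0.06}},
}

for name, spec in vendors.items():
    capacity = spec["raw_effective_vcpus"]
    for feature in features_enabled:
        capacity *= 1 - spec["overhead"].get(feature, 0)   # each enabled feature taxes the sizing
    print(f"{name}: ~{capacity:.0f} effective vCPUs with {', '.join(features_enabled)} enabled")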
Senior Vice President & Chief Information Officer at a hospitality company with 10,001+ employees
Real User
Jul 8, 2019
-Reliability and product reputation
-Simplicity, scalability, and fit for purpose
-Ability to set up a node for remote failover
-Flexibility and supportability
-Ease of Migration and Implementation
-Greatest chance to achieve established success criteria
Principal Product Manager at a computer software company
Real User
Jul 8, 2019
The most important aspect to look for is: does it solve your problems? If you are looking to simplify sizing and purchasing and to have one vendor for sales and support, go for a pre-built HCI appliance. If you are looking to move away from SANs, or simply to augment them on an as-needed basis à la Toyota's JIT methodology, then you should either go for an HCI appliance or purchase the hardware, hypervisor, and SDS solution and build it yourself. There is no "best" or "perfect" HCI solution, only what's best for you. Speaking of HCI, I do work for DataCore Software and can make you a seemingly biased (but accurate and honest) recommendation.
Systems Architect, Principal at CACI International Inc.
Consultant
Jul 8, 2019
Organizational Fit. Does the Vendor's HCI solution meet your organization's tactical and strategic technical environment/requirements as well as budgetary constraints and goals?
Typically the definition is sort of a marketing/sales term (IMHO) that describes a "box": a piece of hardware that has CPU, RAM, storage, and networking, and oftentimes even bundles the hypervisor, as a single package. Everything all in one chassis; sort of the blade server concept extended to virtualization.
In my environment, I have separate servers that I use to construct a cluster (converging the CPU and RAM resources). On top of that, I've installed a virtual SAN (which converges the storage resources). I plan to implement VMware's distributed switch architecture on the cluster, and that will also commoditize the network resources.
So why did I bother telling you all that? To explain why I haven't embraced HCI. For an SMB, HCI just doesn't make sense: cost-wise it is extremely prohibitive, much more than the sum of the parts, I'm afraid. I know some vendors were making attempts to enter the SMB market, but I honestly haven't kept up with that.
IT System Manager at a financial services firm with 1,001-5,000 employees
Real User
Jul 8, 2019
The most important aspect is performance. Therefore, you should be careful when shifting, for example, from a traditional SAN to a hyperconverged solution.
The vendor should guarantee that the solution will not slow down in the long term as the workload increases.
Well, the purpose of an HCI solution is to replace the three-tier architecture so that the footprint can be reduced. My suggestion would be to first finalize your organizational needs; once you have done that, discuss them with your IT partner and evaluate a solution accordingly. There are no number 1 or number 2 criteria here. Every solution has its own pros and cons.
Reliability is the number 1 criterion in an HCI solution. I realize that speed is also important, depending on the size of the files you may be dealing with, and fortunately DataCore also addresses this issue. You get both with this product.
EASE of use: test it for your requirements; understand where your IT will be in one, three, and five years and beyond. Also, what are the outbound costs? And with those costs, are there lock-ins? How is support? Is support old school, with one number to dial followed by the same old support process, or has it been modernized for HCI? Is security built in, or is it a bolt-on? How does micro-segmentation work? How quickly can this be installed and in production? How many references are there? Don't take the vendor's word or their references; find out who is doing this locally and talk to as many of those as you can. And what is the company's vision? Are they based on other companies' technology or can they stand on their own? How quickly can DR be deployed? Is it easy to fail over and fail back? How is it backed up? Can the backup be modernized, or does it require the same old backups based on 3-tier? And don't forget the first thing I wrote: ease of use trumps all. How intuitive is the solution? Can it be managed by generalists, or only by experts? Is training included? Are migration services included?
Vendor Development Executive with 201-500 employees
Real User
Jul 8, 2019
When I think about the discussions with our clients, there is one thing that comes up very regularly. Is the HCI solution able to replace the whole 3-tier-architecture? I am talking about conversations with customers that have IT seats between 500 and 5000 in most cases. They want to have easier management of their DC and therefore they need one infrastructure and no silos.
You should consider how the solution suits your environment requirements, scalability, hyperconverged systems support, integration with your current and planned infrastructure, what kind of training, support and maintenance is required, building blocks variation offered, availability, DR & Backup capabilities, easy administration, and cost-effectiveness.
When looking at an HCI solution, I say the most important thing is to determine what services you want in the solution. Will you need containers, micro-segmentation, automation, movement to and from clouds with APIs, etc.? Most HCI systems have a good file system and manage the basics in the bottom layer. Not all of these HCI systems offer the more advanced aspects and functionality; many are more or less a new way to do the old 3-layer design, which gives limited value if you ask me. Another thing to look for is how complex and time-consuming the extra value-adds are to put into use. The most important differences between HCI solutions are found within these areas. SimpliVity is a good SMB system offering good base functionality, but at the other end it is not by far as advanced as Nutanix and vSAN-based solutions. Nutanix, in turn, has a less complex way of operating its systems compared with vSAN and VMware, which can probably do even more advanced tasks, but at a price, as you have a 100% lock-in on one technology. So again, what are your needs?
COO at a comms service provider with 5,001-10,000 employees
Reseller
Jul 8, 2019
Think of all the stuff you love about the public cloud on the private cloud - one click deployment, pay as you grow, agility, and security.
You should work out your compelling typical uses, such as cognitive applications, databases and other data-intensive applications, or whether you are more focused on ROBO or VDI, and then evaluate the HCI layer that suits your requirements.
CEO & Majority Shareholder at Comdivision Consulting GmbH
Real User
Jul 8, 2019
For us, HCI is a way to move from legacy infrastructure to a more software-defined approach. One of the criteria is vendor support; HW + SW must give me enough freedom.
Second, data availability, vendor integration, and the feature set are important.
The third is vendor support availability and skill set.
While I must say we are happy with the first and second with VMware, we have had issues with support skill sets in the past.
Director & CTO at a tech services company with 11-50 employees
User
Jul 8, 2019
We had been looking for all of the following points:
Very resilient, High Availability, Business Continuity, Rapid Deploy.
Maintenance tasks should be simple, with no downtime or degradation of availability and resources, and either no vendor lock-in or open source with support.
All of this at a cost we could afford (or that your client can afford, if you are recommending based on your setup).
So after a lot of exploring with all features and costs in mind, we went for an open-source solution, trying out GlusterFS initially, then switching to the Ceph storage system with KVM and LXC on CentOS 7. We later switched to Proxmox VE. It is, after all, still Ceph, KVM, and LXC, but with all the major benefits, and it runs on our existing commodity hardware. It has now been running for more than four years, all going well and expanding.
For SMBs it's all about finding a solution that works at a cost you can afford. In most cases SMBs are not pushing today's hardware to the performance limit.
Network Administrator at a transportation company with 51-200 employees
Real User
Sep 4, 2018
- Cost
- Whether the solution or product you choose fits your company's needs
- The scalability and reliability of the solution
- Whether the solution will benefit the company
With the lockdown lifted, there has been a flurry of activity by the good, the bad, and the ugly. The baddies are busy scanning every network to find anyone out there who has left their backyard open. Quite a few have, owing to shortcuts deployed to provide access to their users working from home.
An interesting awakening we see is the requirement to deploy East-West firewalls. OEMs have either come up with their own brew (NXT with VMware) or an integration with known market brands (Nutanix with F5). High-level architecture, functionality, and deployment ease are not very different, and it's hard to say that one given approach outweighs the other. The F5 solution can work with either a vSAN or a VxRail based scale-out approach; NXT could be a good choice if deploying VMware over AOS, which is quite rare.
This is yet another item one may want to have on a checklist when evaluating an HCI solution. I'm waiting to see how new players such as Quantum, which bought a niche Indian startup to fast-track into this space, work out.
Hyper-Converged Infrastructure refers to a system where numerous integrated technologies can be managed within a single system, through one main channel. Typically software-centric, the architecture tightly integrates storage, networking, and virtual machines.
1) Easy to operate or not
2) Cost
3) Scalable
Absolutely, the most important aspects are:
1- Simplification: simple to implement, simple to manage, and simple to use.
2- Reliability: there is always more reliability compared with a traditional solution.
For these two items, when you look at the cost, or better yet compare the TCO, a hyper-converged solution is always better.
Data protection is my primary concern; backup and restore is a must-have feature.
The #1 decision is the technical management of the infrastructure, in order to ensure that a minimum of human resources needs to be allocated to it.
(1) Meets requirements
(2) Costs (setup and maintenance)
(3) Ease of use
TCO, then Capability/Performance
Thank you all for the answers!
Hardware and Hypervisor Flexibility.