Hyper-converged is typically an "all in one box/rack" solution. It consists of compute, storage & network resources all tied together physically (and through software).
The pro of hyper-converged is that it is a complete solution. You don't have to architect it; all you need to know is how much "power" you need (that is, what you want to do with it). With converged infrastructure (which can still be 'software defined'), you have to match and configure the components to work together yourself.
More often than not, converged infrastructure is cheaper. You might already have the storage and networking resources, for example, and manufacturers put a premium on packaging the solution together.
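To make the "how much power you need" point concrete, here is a minimal sizing sketch in Python. The per-node figures and the hci_nodes_needed helper are hypothetical placeholders for illustration, not any vendor's sizing tool:

```python
# Rough HCI sizing sketch: estimate how many identical nodes a cluster
# needs to cover an aggregate workload. All per-node figures are
# illustrative assumptions, not real product specs.
from math import ceil

NODE_VCPU = 64        # usable vCPUs per node (assumed)
NODE_RAM_GB = 512     # usable RAM per node (assumed)
NODE_USABLE_TB = 20   # usable storage per node after replication (assumed)

def hci_nodes_needed(total_vcpu, total_ram_gb, total_storage_tb, min_nodes=3):
    """Node count is driven by whichever resource is most constrained."""
    by_cpu = ceil(total_vcpu / NODE_VCPU)
    by_ram = ceil(total_ram_gb / NODE_RAM_GB)
    by_disk = ceil(total_storage_tb / NODE_USABLE_TB)
    return max(min_nodes, by_cpu, by_ram, by_disk)

print(hci_nodes_needed(total_vcpu=300, total_ram_gb=2048, total_storage_tb=90))  # -> 5
```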
Technical Solutions Architect at Denali Advanced Integration
Vendor
Dec 14, 2021
The key differences are scale, complexity and ease of use/management.
Converged systems are more complex, assembled from components from multiple vendors and managed with different tools, which allows them to scale tremendously (hundreds of servers and multiple petabytes of storage).
Hyper-converged systems are smaller, wrapped in a tight, easy-to-deploy shell that guides the user through system expansion, up to node limits of roughly 32 servers. Hyper-converged systems rely on storage integrated into rack-mount servers. To ensure stability, the user's choices of storage and compute options are limited. These options have been well vetted and are fully supported by the HCI vendor.
Technical Solutions Architect at Denali Advanced Integration
Vendor
Dec 16, 2021
@Steffen Hornung Note that in the above document, Nutanix claims a limit of 16 nodes supported under AWS. Supported node scalability is determined by whatever limits the vendor or the underlying software maintains. For example, as you note above, VMware does not support a vSAN cluster greater than 64 nodes. HPE's SimpliVity, Dell's VxRail (based on VMware), and Cisco's HyperFlex all have 32-node limitations.
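As a quick illustration of how those per-platform ceilings might be checked when planning an expansion, here is a small Python sketch. The numbers in the dictionary are simply the figures quoted in this discussion and should be verified against current vendor documentation:

```python
# Cluster-size sanity check against the node limits cited in this thread.
# These figures reflect the discussion above, not authoritative vendor docs.
HCI_NODE_LIMITS = {
    "Nutanix on AWS": 16,
    "VMware vSAN": 64,
    "HPE SimpliVity": 32,
    "Dell VxRail": 32,
    "Cisco HyperFlex": 32,
}

def can_expand(platform: str, current_nodes: int, add_nodes: int) -> bool:
    """True if the expanded cluster stays within the platform's supported node count."""
    return current_nodes + add_nodes <= HCI_NODE_LIMITS[platform]

print(can_expand("VMware vSAN", current_nodes=48, add_nodes=8))    # True
print(can_expand("HPE SimpliVity", current_nodes=30, add_nodes=4)) # False
```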
As for enterprise-class applications beyond 62TB, I see major Epic installations at large, integrated healthcare organizations and SAP HANA for major retailers with global supply chains. That said, under the pre-COVID cost curve, creating compartmentalized applications that are regional in nature, with DR/BC sites, makes a lot of sense. We just saw ransomware attacks against cloud-based Kronos payroll systems, taking them down just prior to the Christmas holidays and disrupting customers' operational models at a time when they already had enough issues to deal with and did not need another major cyber threat.
From a strategic perspective of application design, it makes sense to compartmentalize. The notion of running your entire business on a single HCI infrastructure is not a best practice and should not be encouraged, for the reasons articulated above. Lose the King, lose the game. Better to have a lot of pawns as we deal with the current hybrid cloud and cyber landscape.
Backup Audit & Consulting at a tech services company with 201-500 employees
User
Aug 12, 2020
Hyperconverged is a cluster of at least three nodes. The system mirrors data between the nodes and runs virtual machines.
A converged system is anything between the classic server and a hyperconverged platform. The converged concept was useful while waiting for hyperconverged to mature, and it should disappear in the near future.
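A hedged sketch of why the three-node minimum and the mirroring matter for capacity planning; the replication factor and disk sizes below are assumptions for illustration, not any specific product's numbers:

```python
# Usable-capacity sketch for an HCI cluster that mirrors data across nodes.
# replication_factor=2 means every block is stored on two nodes, so usable
# capacity is roughly half of raw capacity (ignoring metadata and slack space).
def usable_capacity_tb(nodes: int, raw_tb_per_node: float, replication_factor: int = 2) -> float:
    if nodes < 3:
        raise ValueError("HCI clusters typically require at least 3 nodes")
    return nodes * raw_tb_per_node / replication_factor

print(usable_capacity_tb(nodes=3, raw_tb_per_node=20.0))  # 30.0 TB usable from 60 TB raw
```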
Converged infrastructure still incorporates hardware, running the technology natively on hardware. On the other hand, hyperconvergence is fully software-defined and completely integrated.
Oh, you can't get rid of hardware in any way. (Damn you, Apple, for auto-correcting English back to German.)
But it is true that HCI is a software-defined approach, which has the advantage of delivering new features without new hardware.
Another thing that distinguishes hyperconverged solutions from converged ones is their scale-out nature: simply add more nodes to the system to support new workloads without losing performance, because you add all resource types at once (compute, storage, and networking).
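To illustrate that scale-out behaviour, here is a minimal Python sketch showing how every added node contributes compute, storage, and network capacity in lockstep. The per-node figures are hypothetical:

```python
# Scale-out sketch: each node added to an HCI cluster brings compute,
# storage and network resources at the same time.
from dataclasses import dataclass

@dataclass
class Node:
    vcpus: int = 64               # assumed per-node compute
    raw_storage_tb: float = 20.0  # assumed per-node raw storage
    network_gbps: int = 25        # assumed per-node network bandwidth

def cluster_totals(node_count: int, node: Node = Node()) -> dict:
    """Aggregate capacity grows linearly in every dimension as nodes are added."""
    return {
        "vcpus": node_count * node.vcpus,
        "raw_storage_tb": node_count * node.raw_storage_tb,
        "network_gbps": node_count * node.network_gbps,
    }

print(cluster_totals(4))  # {'vcpus': 256, 'raw_storage_tb': 80.0, 'network_gbps': 100}
```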
Hyper-converged infrastructure refers to an architecture in which numerous integrated technologies are managed as a single system, through one main channel. Typically software-centric, it tightly integrates storage, networking, and virtual machines.