What it comes down to is selecting a solution that is right-sized for your workloads and your environment. Make sure you fully understand your requirements - your reads and writes - and then build a solution that adequately handles them, while also giving you the flexibility to scale in future should those requirements change. A lot of organizations purchase off-the-shelf appliances or solutions loaded with unnecessary features because it feels like the right thing to do. But unless you live in Beverly Hills, you don't take your kids to school in a Ferrari, so why do the same for your IT?
So ultimately, look for flexibility: does the solution let you use the hardware you want and need, does it avoid vendor lock-in, and can you scale it easily and at minimal cost in future if you need to?
Stay as detached as possible from the hardware, and look for a geo cluster that works without a witness. Today, most solutions offer a good level of performance. StarWind delivers very good performance while remaining truly hardware-agnostic, and its HeartBeat link option means we do not need a third site.
Founder, Professional Services Director, Lead Architect at Falcon Consulting
Real User
Top 20
Sep 4, 2021
Sustainable performance (practical delivered IOPS versus calculated IOPS, allocated network channels, and bandwidth utilisation ratio), security, robustness, and recoverability.
Also, the capability to interact with networking hardware acceleration (flow control, IO buffer management, and traffic shaping), enterprise storage features such as sync/async replication, deduplication, snapshots and snapshot integration, and flexibility in storage volume formation, such as media type and volume structure (RAID level, multi-tiering, multi-level caching).
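The "practical delivered IOPS versus calculated IOPS" comparison above can be reduced to two simple ratios. A minimal sketch, with entirely hypothetical numbers and function names (not tied to any vendor tool):

```python
# Illustrative sketch: compare the IOPS a benchmark actually delivered
# against the vendor-calculated figure, and compute a bandwidth
# utilisation ratio for the allocated network channel.

def sustained_ratio(delivered_iops: float, rated_iops: float) -> float:
    """Fraction of the rated IOPS the platform actually sustained."""
    return delivered_iops / rated_iops

def bandwidth_utilisation(observed_mbps: float, link_mbps: float) -> float:
    """How much of the allocated network channel the workload consumed."""
    return observed_mbps / link_mbps

# Example: a node rated for 100k IOPS that sustained 62k under load,
# over a 10 GbE link (~1,250 MB/s) carrying 640 MB/s.
print(f"IOPS ratio: {sustained_ratio(62_000, 100_000):.0%}")
print(f"Link utilisation: {bandwidth_utilisation(640, 1_250):.0%}")
```

A solution that sustains only half of its rated IOPS under a realistic mixed workload is a sign the "calculated" figure was measured under idealized conditions.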
Rather than evaluating based solely on the feature set, make sure the SDS platform will meet all of the business objectives.
1. Multi-protocol capabilities:
A reliable software-defined storage solution must support a variety of protocols to collect, store, and retrieve data. A multi-purpose software storage solution should support the iSCSI and Fibre Channel protocols for block-level application workloads as well as NFS and SMB for distributed file systems. Multi-protocol capability lets end-users create and scale unified storage pools that logically combine SAN and NAS volumes to support various data types and applications.
This way, end-users gain accelerated performance from cost-effective storage virtualization, and as your organization's data storage needs grow, SDS lets you deploy new volumes without fretting about whether they will integrate smoothly with other systems.
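A unified SAN+NAS pool of the kind described above can be pictured as one capacity pool exposing both block and file volumes. The following sketch is purely illustrative - the class names and protocol strings are assumptions, not any vendor's API:

```python
# Hypothetical model of a unified pool exposing both SAN (block) and
# NAS (file) volumes from one capacity pool.
from dataclasses import dataclass, field

SAN_PROTOCOLS = {"iscsi", "fc"}   # block-level exports
NAS_PROTOCOLS = {"nfs", "smb"}    # file-level exports

@dataclass
class Volume:
    name: str
    size_gb: int
    protocol: str

    def __post_init__(self):
        if self.protocol not in SAN_PROTOCOLS | NAS_PROTOCOLS:
            raise ValueError(f"unsupported protocol: {self.protocol}")

@dataclass
class UnifiedPool:
    capacity_gb: int
    volumes: list = field(default_factory=list)

    def provision(self, vol: Volume) -> None:
        used = sum(v.size_gb for v in self.volumes)
        if used + vol.size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.volumes.append(vol)

pool = UnifiedPool(capacity_gb=10_000)
pool.provision(Volume("db-lun", 2_000, "iscsi"))    # block volume for a database
pool.provision(Volume("home-share", 1_000, "smb"))  # file share for users
```

The point of the sketch is the logical separation: one pool of capacity, many protocol front-ends, so SAN and NAS workloads draw from the same free space.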
2. Cloud Storage Integration:
In any organization, data follows a typical "life cycle": it begins as "hot", business-critical data, then cools and is accessed less frequently. If not managed properly, this life cycle can become a big headache on overcrowded high-performance arrays. The problem is not a new one, but the solution should be advanced enough to handle it efficiently and effectively.
The best software storage solutions allow end-users to seamlessly integrate the cloud of their choice into their virtualized infrastructure, so storage professionals can easily move cold files between on-prem and cloud storage and manage all of their data under a unified strategy.
3. Advanced Data Services:
Advanced data services are the premium features offered by enterprise-level software storage solutions. Vendors such as StoneFly (SCVM), NetApp, and DataCore offer remarkable data services, including snapshot technology, async/sync replication, data deduplication, and reliable security features that deliver considerable benefit.
PreSales Manager at a tech services company with 1-10 employees
User
Mar 13, 2020
You have to understand converged versus hyperconverged infrastructure. The difference lies mainly in the SAN: in a converged environment, the SAN is a single point of failure, whereas an HCI solution has no single point of failure and can reach the same number of IOPS as a converged infrastructure, or higher. Many companies are slow to make the change because of the initial investment cost; in the long run companies choose HCI, but whether HCI is the right solution for a given case requires a separate analysis.
Solutions Architect/Team Lead - Business Data and Data Protection at a tech consulting company with 501-1,000 employees
Real User
Nov 19, 2019
The aspect that is most important depends entirely upon the business objectives and needs of the client. Some need scalability, some need compatibility with a specific application, some need specific hypervisors, and some need to focus on DR/backup capability. It's not a great question.
As per my understanding, keep the following in mind:
Performance :- SDS (software-defined storage) adds a layer on top of physical storage devices that can reduce their raw performance (IOPS), but it performs well with any RAID configuration.
Compatibility :- It can be compatible with all types of hardware and operating systems, but I would suggest a bare-metal installation, because it reduces licensing costs and gives better system performance.
Security :- It should include all the basic security and encryption methods, such as locking the hard drive or OS after repeated wrong login attempts.
Features :- SDS should offer AD integration, the ability to bind drives/shared paths to network classes, IPs, or MAC addresses (which makes it more secure), a drive locker, etc.
You need scale-out object storage, in which the system creates and allocates a unique identifier for each object.
You should also evaluate which solution creates highly available scale-out file shares for use with application storage, and which SDS products can run on a server OS or in a VM, either on-premises or in the cloud.
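The two ideas in that comment - unique object identifiers, and scale-out placement derived from them - can be sketched in a few lines. The node names below are hypothetical, and the modulo placement is a simplification (production systems use consistent hashing or a placement algorithm such as CRUSH so that adding a node does not reshuffle everything):

```python
# Sketch: allocate a unique ID per object, then hash the ID to pick
# which node in the scale-out cluster holds it.
import hashlib
import uuid

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster members

def new_object_id() -> str:
    """Allocate a globally unique identifier for a stored object."""
    return str(uuid.uuid4())

def place(object_id: str, nodes=NODES) -> str:
    """Deterministically map an object ID onto one of the cluster nodes."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

oid = new_object_id()
print(oid, "->", place(oid))
```

Because placement is a pure function of the ID, any node can compute where an object lives without consulting a central lookup table - which is what makes the design scale out.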
CEO and President DataCore Software Corporation at DataCore Software
Vendor
Aug 19, 2017
Start with the economics. In your evaluation criteria, stress not only the new features and capabilities being touted, but also how much disruption the solution will cause to your current environment, whether it protects and leverages your existing investments, whether the software can bridge different deployment models (server SAN, pure software, appliance, hyperconverged, or hybrid cloud) since we live in a 'hybrid' world, and how much true agility it brings to meet change and growth. Too often vendors tout specific new models or features and describe these new 'shiny objects' as panaceas, but the new often comes with a 'rip and replace' mindset that forgets about existing investments. The goal should be to add agility and future-proofing to your infrastructure so it can readily accept new technologies and absorb them within the overall management layer, rather than creating yet another independent silo to manage. Look at the economics and think big picture to avoid stop-gap solutions that actually add complexity and cost.
What is software-defined storage? Software-defined storage (SDS) is a software-based storage solution that provides greater flexibility and independence than traditional network-attached storage (NAS) or a storage area network (SAN). Although software-defined storage can work in and on top of both NAS and SAN environments, it is usually designed to run on industry-standard x86 servers.
Software-defined storage allows for separation and independence from traditional hardware...
Performance, Scalability, Reliability and Availability, Compatibility and Integration, Security
IOPS, restoration, and backups meeting RTO and RPO.
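Whether backups actually meet RTO and RPO is a check worth automating. A hedged sketch with illustrative figures (a 4-hour RPO and 1-hour RTO are assumptions, not recommendations):

```python
# Sketch: does the newest backup satisfy the RPO, and did a rehearsed
# restore complete within the RTO?
from datetime import datetime, timedelta

RPO = timedelta(hours=4)   # max tolerable data loss
RTO = timedelta(hours=1)   # max tolerable downtime

def rpo_met(last_backup: datetime, now: datetime) -> bool:
    """True while the newest backup is younger than the RPO window."""
    return now - last_backup <= RPO

def rto_met(restore_duration: timedelta) -> bool:
    """True when a rehearsed restore completed within the RTO."""
    return restore_duration <= RTO

now = datetime(2021, 9, 4, 12, 0)
assert rpo_met(datetime(2021, 9, 4, 9, 30), now)     # 2.5 h old backup: OK
assert not rpo_met(datetime(2021, 9, 4, 6, 0), now)  # 6 h old backup: RPO missed
assert rto_met(timedelta(minutes=45))                # 45 min restore: OK
```

The RTO half only means something if you periodically rehearse a real restore and feed the measured duration in - an untested backup satisfies neither objective.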
The scalability and flexibility along with the integrations and costs.
After evaluating all of the requirements, the first and main aspect should be reliability and data integrity.
Do a careful POC and make very sure the solution does not corrupt data when you have a major storage issue, such as an array failure.
Ease of use, availability, performance and support.
Built-in reliable performance metrics using recognised testing criteria and testing methodology.
Easy to use and configure
Its performance and weaknesses.
- Limit the overhead added by the SDS software, preserving as much of the original storage performance as possible.
- Stability: SDS becomes a critical component of the infrastructure, so it must actively contribute to increasing the number of nines.
- Support (a very important aspect): it has to be fast, reliable, and flexible.
(1) Stability
(2) Support
(3) Performance
We were recently looking for a solution with excellent reliability from a vendor that was not likely to disappear or drop the solution.
Also, we were price-sensitive so this played a large factor in our decision-making process.
Company Reputation, Costs, scalability; features for Cloud or DR.
Price and support for when problems happen.
Business needs are the focus.
Generally, when evaluating SDS, we look for resilience, simplified management, and performance.