We all know that it's important to conduct a trial and/or proof-of-concept as part of the buying process.
Do you have any advice for the community about the best way to conduct a trial or POC? How do you conduct a trial effectively?
Are there any mistakes to avoid?
Use a random, mixed I/O model and overwrite the data multiple times to make the garbage collection (GC) work.
If the AFA supports deduplication and compression, the data reduction ratio of the test data should be kept within a reasonable range.
The data tested should be realistic, and the space consumed after deduplication and compression should be more than 60% of the usable capacity.
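A rough sketch of what that preconditioning pass could look like, assuming fio is available and you have a dedicated test LUN you are allowed to overwrite. The device path, capacity, and reduction percentages below are placeholders; verify the option names against your fio version.

```python
#!/usr/bin/env python3
"""Precondition a test LUN with random, mixed I/O so garbage collection is
actually exercised, while keeping the data reduction ratio realistic.
All values below are placeholders; adjust them to your environment."""
import subprocess

TARGET = "/dev/mapper/afa_test_lun"   # hypothetical dedicated test device
USABLE_CAPACITY_GB = 10240            # example: 10 TiB usable on the array
FILL_FRACTION = 0.6                   # fill more than 60% of usable capacity

fill_size_gb = int(USABLE_CAPACITY_GB * FILL_FRACTION)

subprocess.run([
    "fio",
    "--name=precondition",
    f"--filename={TARGET}",
    "--rw=randrw", "--rwmixread=70",        # random, mixed I/O model
    "--bs=8k", "--iodepth=32", "--numjobs=4",
    "--ioengine=libaio", "--direct=1",
    f"--size={fill_size_gb}G",
    "--loops=3",                            # overwrite several times so GC has to work
    "--buffer_compress_percentage=50",      # keep compression savings realistic,
    "--dedupe_percentage=30",               # not an all-zero best case
    "--group_reporting",
], check=True)
```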
Depending on your situation, if your trial is about application speed and throughput, have the manufacturer provide a demo unit and move a host to the array, preferably a test host, and simulate your users.
If, however, your trial is more about reliability and fault tolerance, that can be accomplished with a demo unit without having to install it on your network. Test multi-drive failures and entire shelf failures, and check recovery speed and ability.
The bottom line is to know exactly what the goal of your trial is and to understand the technology behind it. All flash arrays are fast and provide serious throughput, but none of that matters if failures are not easily recovered or avoided.
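One way to make that recovery testing measurable is to keep a steady workload running and log latency while you pull drives. A minimal sketch, assuming fio is installed; the device path and run length are placeholders.

```python
#!/usr/bin/env python3
"""Keep a steady workload running while you pull drives or a shelf, and log
latency over time so recovery behaviour is visible. Assumes fio is installed;
the device path and run length are placeholders."""
import subprocess

TARGET = "/dev/mapper/afa_test_lun"   # hypothetical dedicated test device
RUNTIME_SECONDS = 4 * 3600            # long enough to cover a full rebuild

subprocess.run([
    "fio",
    "--name=failure_soak",
    f"--filename={TARGET}",
    "--rw=randrw", "--rwmixread=70",
    "--bs=8k", "--iodepth=16", "--numjobs=2",
    "--ioengine=libaio", "--direct=1",
    "--time_based", f"--runtime={RUNTIME_SECONDS}",
    "--write_lat_log=failure_soak",   # writes failure_soak_*.log latency files
    "--log_avg_msec=1000",            # one averaged sample per second
    "--group_reporting",
], check=True)
```

Note the wall-clock time when you pull each drive or shelf so you can line those events up with the latency log afterwards.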
Pitfalls: local drives in hosts tend to mask the potential speed of some applications. Custom apps, especially legacy apps, can react in not-so-beneficial ways to the huge speed increase. Expansion and drive replacement should be well understood and planned for. Lastly, the amount of usable space you think you need will most likely double or more by the time you install. Purchase more than you need regardless of features like "live de-dupe"; don't fall for the sales hype about how much space you will save. I promise it will get used.
Frankly, attempting a trial of a capex item of this value is almost impossible. I don't know of any vendors who would permit a trial of such an item, not to mention the expense and effort to configure and then fully and properly test something like this.
I would suggest that you review your existing load(s), and then examine the published performance characteristics of the arrays you're interested in.
Once you have your baseline criteria, add 30% - 50% to capacity and demand in order to ensure that you have headroom until your depreciation is reached, then choose based on price/performance after researching any independent feedback regarding your target device.
Also... perform at least one site visit to someone already using one.
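As a back-of-the-envelope illustration of that headroom rule (every number below is a made-up example, not a measurement, and the growth-rate projection is my own addition to the idea):

```python
# Back-of-the-envelope sizing for the 30%-50% headroom rule above.
# Every number here is an example, not a measurement.
current_used_tb = 40          # usable TB consumed on your existing arrays today
annual_growth_rate = 0.25     # taken from your own capacity trend data
years_to_depreciation = 5
headroom = 0.5                # the 30%-50% rule; use the conservative end

projected_tb = current_used_tb * (1 + annual_growth_rate) ** years_to_depreciation
required_usable_tb = projected_tb * (1 + headroom)
print(f"Plan for roughly {required_usable_tb:.0f} TB usable, "
      f"before counting any dedupe/compression claims")
```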
The best way is to determine the following -
1. What will the AFA be used for - workload type, application, etc.
2. What is the required workload? As noted previously, evaluating a rocket ship versus a car is a big difference in pricing, IOPS, etc.
3. Test multiple vendors - as many as possible that offer the solution required for the PoC as it will help with pricing in the end.
4. Test, test, test - be sure to put the AFA solution through its paces when testing so the one you purchase is the one you need.
There are many vendors in the AFA space and not all are equal. Be sure to research while conducting testing.
The best advice I can offer is to clearly define the business objectives for the POC. Just trialing an AFA without a specific goal is the mistake to avoid. For instance, if your required workload (if known) is 50,000 IOPS at 1 ms response time for 1,000 concurrent users, it is not very meaningful to evaluate systems that provide 2 million IOPS at 0.5 ms response time.
If you can run a real workload and inject a predictable load, that would be great.
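For example, a load-injection sketch that holds the offered rate at the required level (the 50,000 IOPS / ~1 ms example above) instead of chasing the array's maximum. It assumes fio and a placeholder device path; the JSON field names match recent fio versions, so verify on yours.

```python
#!/usr/bin/env python3
"""Hold the offered load at the level the business actually needs, then read
back the observed latency. Device path and numbers are placeholders."""
import json
import subprocess

TARGET = "/dev/mapper/afa_test_lun"
JOBS = 4
TARGET_IOPS_TOTAL = 50_000
per_job_iops = TARGET_IOPS_TOTAL // JOBS   # fio applies rate_iops per job

result = subprocess.run([
    "fio",
    "--name=requirement_check",
    f"--filename={TARGET}",
    "--rw=randrw", "--rwmixread=70",
    "--bs=8k", "--iodepth=16", f"--numjobs={JOBS}",
    f"--rate_iops={per_job_iops}",         # predictable, capped offered load
    "--ioengine=libaio", "--direct=1",
    "--time_based", "--runtime=1800",
    "--group_reporting",
    "--output-format=json",
], check=True, capture_output=True, text=True)

job = json.loads(result.stdout)["jobs"][0]
read_lat_ms = job["read"]["clat_ns"]["mean"] / 1e6
write_lat_ms = job["write"]["clat_ns"]["mean"] / 1e6
print(f"~{TARGET_IOPS_TOTAL} offered IOPS: read ~{read_lat_ms:.2f} ms, "
      f"write ~{write_lat_ms:.2f} ms mean latency")
```

The point is to measure latency at the load you actually need, which is a far more meaningful comparison across vendors than each array's headline maximum.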
Your requirements need to be clear. Is your device also going to do replication? If so, you need to test at least two devices. What are you looking at... is it the F or FX, and what is your roadmap? What will you require in the next purchase after this one? What has your storage growth been, and how long do you expect this device to last? What is your maintenance cost after acquisition, and can you get confirmation of it as part of your purchase?
Yes, testing all of this requires a lot of time and effort. Alternatively, you can do a POV, i.e. procure pending a successful test/use of the device. Make sure that the guarantees are signed by the HW supplier and not the salesman.
The best way is to contact your VAR/MSP/distributor or your direct contact inside the hardware manufacturer to ask for a POC / try-and-buy. It is important that you know your goals and what you want to test. The testing period is normally 30 to 60 days, and normally there is no problem getting a T&B approved. But as I said, it is important to know what you want to test. Performance is mostly not a differentiator because they are all fast; differentiators are perhaps latency, connectivity, the use of NVMe drives or SSDs, etc.
A PoC is a surefire way to ascertain whether a product is fit for your use case and environment. There is a significant amount of work that needs to happen before the initiation of a PoC, and there are quite a few gotchas. A PoC, by definition, is a set of tests undertaken to prove/validate something. On that note, before commencing anything, you need to:
1. Define in clear terms what your use case is. What are the workloads you would look to run in production if this box were inducted? Try to get a subset of those workloads, or the same workloads, running in the PoC. The idea is to make the PoC environment reflective of your production.
2. Define your expectations and make them as granular as possible. Words such as "performance" mean nothing unless you put down metrics along with them. What are the architecture principles you want to prove: availability, scalability, resiliency, etc.? For each of these there needs to be a granular definition of your expectations; for performance, for example, metrics such as IOPS and latency.
3. Do the best you can to gather as much data as possible about the current environment to draw a baseline. This is one of the gotchas: it rarely gets done. Characterise the current environment using the same metrics you will use in the PoC environment; the idea is that after completing the PoC, when you do the comparison, you end up comparing apples to apples, not apples to oranges. Leverage your historical monitoring data to create a performance map of your current environment.
4. For each of these principles, create a set of test cases that will validate fit. Then, for each test case, define success criteria and the expected response.
5. Identify the tool suites you will use. These may be generic ones, but ideally they should also relate to the use case you are looking at. Examples include HammerDB for databases, IOMeter for storage, Login VSI for VDI, etc.
6. Run each test diligently and systematically. For performance tests, particularly around benchmarking, there are specific recommendations (for IOMeter, for example) that you need to follow. Run performance tests for at least 120 minutes (I prefer longer!); that will make the results more meaningful. A rough sketch of such a run follows this list.
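To make points 4 and 6 concrete, here is a minimal sketch of a test-case runner, assuming fio as the load generator; the device path, workload definitions, runtimes, and thresholds are illustrative only.

```python
#!/usr/bin/env python3
"""Each test case carries explicit success criteria, runs for a fixed (long)
duration, and is checked mechanically afterwards. Workload definitions and
thresholds below are examples, not recommendations."""
import json
import subprocess

TARGET = "/dev/mapper/afa_test_lun"   # placeholder test device
RUNTIME_SECONDS = 7200                # at least 120 minutes, per the advice above

TEST_CASES = [
    # name, fio pattern, block size, minimum IOPS, maximum mean latency (ms)
    {"name": "oltp_like",   "rw": "randrw", "bs": "8k",   "min_iops": 50_000, "max_lat_ms": 1.0},
    {"name": "backup_like", "rw": "write",  "bs": "256k", "min_iops": 4_000,  "max_lat_ms": 5.0},
]

def run_case(case):
    out = subprocess.run([
        "fio", f"--name={case['name']}", f"--filename={TARGET}",
        f"--rw={case['rw']}", f"--bs={case['bs']}",
        "--iodepth=32", "--numjobs=4", "--ioengine=libaio", "--direct=1",
        "--time_based", f"--runtime={RUNTIME_SECONDS}",
        "--group_reporting", "--output-format=json",
    ], check=True, capture_output=True, text=True)
    job = json.loads(out.stdout)["jobs"][0]
    iops = job["read"]["iops"] + job["write"]["iops"]
    lat_ns = max(job["read"]["clat_ns"]["mean"], job["write"]["clat_ns"]["mean"])
    return iops, lat_ns / 1e6

for case in TEST_CASES:
    iops, lat_ms = run_case(case)
    passed = iops >= case["min_iops"] and lat_ms <= case["max_lat_ms"]
    print(f"{case['name']}: {iops:,.0f} IOPS, {lat_ms:.2f} ms mean latency -> "
          f"{'PASS' if passed else 'FAIL'}")
```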
I would also recommend working with a trusted reseller/VAR/MSP that has worked with multiple types of storage; tell them your goals, and they can share real-world results they have seen and explain why one approach or solution might be better in a given case.