PeerSpot user
System Administrator - Backup & Storage Specialist at METRO SYSTEMS Romania
Consultant
It provides very good storage, High Availability, and data protection. The thing we'd like to see the most is the possibility of pairing LAN/SAN ports from different nodes.

What is most valuable?

What impressed me the most about these systems is their excellent reliability, ease of administration (both via the GUI and the command line), and very good documentation that is easy to access and understand. It provides very good storage, High Availability, and data protection by employing two separate storage controllers that can take over each other's role as soon as either of them goes down. The technology has been improved even further with the introduction of the clustered Data ONTAP (cDOT) OS.
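
As a rough illustration of the HA behaviour described above, here is a tiny conceptual sketch in Python (this is not ONTAP code; all names are made up): each controller owns its own aggregates, and when one goes down its partner takes over ownership and keeps serving the data.

class Controller:
    def __init__(self, name, aggregates):
        self.name = name
        self.aggregates = set(aggregates)   # aggregates this controller currently serves
        self.partner = None
        self.up = True

def takeover(failed):
    """The surviving partner takes ownership of the failed node's aggregates."""
    failed.up = False
    failed.partner.aggregates |= failed.aggregates
    failed.aggregates = set()

node_a = Controller("node-a", {"aggr1"})
node_b = Controller("node-b", {"aggr2"})
node_a.partner, node_b.partner = node_b, node_a

takeover(node_a)                 # node-a goes down...
print(node_b.aggregates)         # {'aggr1', 'aggr2'} -- node-b now serves both, data stays available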

How has it helped my organization?

NetApp systems are a good choice if you want a versatile unified system that's also capable of delivering performance. Our company has been using NetApp filers both as file sharing solutions (CIFS over LAN) and also as block storage (LUNs) for VMware ESXi hosts.

Since we switched to the newer 2552 models, we now benefit from better data protection and improved storage capacity thanks to the clustered Data ONTAP OS.

What needs improvement?

The thing we'd like to see the most is the possibility of pairing LAN/SAN ports from different nodes. Currently, the systems provide pairing (and thus redundancy) only at the same-node level. It also wouldn't hurt to have this sort of cross-functionality when choosing disks for aggregate structures. Right now, you can't combine disks from different shelves in the same storage aggregate.

For how long have I used the solution?

I've had the chance to work a lot with the NetApp FAS 2552 series and also have some experience with older models such as the 2050, 2040, 3240 and 2240. I think it's a pretty reliable unified storage solution. The FAS 2552 model, especially, offers good performance and excellent reliability. However, my experience with similar storage systems is currently somewhat limited.

My company has been using NetApp for a few years now (over four, I think), and I have been working with this technology for over a year.


What was my experience with deployment of the solution?

When it comes to deployment, we had our share of issues. Some of them were due to the vendor's lack of experience with the new models and ONTAP versions, but sometimes the systems themselves were faulty.

What do I think about the stability of the solution?

The most recent issue we had involved a LAN card that couldn't be set to the correct bandwidth setting. As a consequence, the vendor had to replace the motherboard of one of the nodes.

What do I think about the scalability of the solution?

There have been no issues with scaling it, other than during the actual deployment of new devices.

How are customer service and support?

If you buy NetApp systems from third-party vendors, you may be surprised to find that their technicians aren't exactly up to date with the latest ONTAP versions. NetApp releases new versions (with great improvements) so often that it's hard for some vendors to keep their technical knowledge base current.

However, when it comes to technical support from NetApp directly, they have a very competent team and the reaction time is pretty decent. Perhaps their biggest strength in this area is their public knowledge base, which helps you solve on your own most of the issues you can encounter while configuring and administering the systems.

How was the initial setup?

All I can say is that if you take your time and study the NetApp documentation, you shouldn't have any issue, provided the initial setup was done properly by the vendor technician.

What about the implementation team?

Initial setup is usually performed by NetApp or the third-party vendor from whom you purchased the devices. Our experience with third-party vendors isn't the best due to reasons stated above. All other configuration and administration is done in-house.

What's my experience with pricing, setup cost, and licensing?

When it comes to software licensing, I think NetApp promotes a very fair system. Basically, you only pay for the features you need (e.g., Cluster Mode, SnapMirror, SnapVault).

What other advice do I have?

The best advice I can offer is to try to purchase it directly from NetApp in order to have a better chance of a successful initial configuration on the first try. Also, make sure you purchase the system with a General Availability OS version, as Release Candidate ones tend to be buggy.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user332652 - PeerSpot reviewer
Storage Administrator at SRPNet
Vendor
It has the capability to use SAN, so it has a broad spectrum of use. I'd like to see more cohesiveness with a unified manager.

Valuable Features

  • Software features, such as being able to do snapshots and file system optimization
  • High Availability -- components fail so this is a nice feature to have when failing over. There's no downtime, so we don’t lose data.

Improvements to My Organization

Good bang for the buck. Also, we use NFS generally, but FAS has the capability to use SAN, so it has a broad spectrum of use.

Room for Improvement

Tough for me to answer because I'm limited in my role, but the one thing I'd like to see most is more cohesiveness with a unified manager. I like the end product, but it's not really all integrated and is convoluted with different managers. I would like a single pane of glass, a single dashboard.

Deployment Issues

We see a lot of bugs in rollouts, and sometimes I think the first GA releases are late-beta quality. My impression is they could have let it bake a little longer, but it could also be because of some of the environments it deploys in.

Stability Issues

SnapManager v3.3.1 is a little buggy and NetApp doesn't offer a training course on it. So it could be what I've been taught by other people, or it could in fact be buggy, but it's likely a little of both. Hopefully they have made improvements in 3.4.

Scalability Issues

7-Mode scales very well. I'm even more impressed with where they intend to go with cDOT, but it may have been rolled out prematurely.

Customer Service and Technical Support

Tech support is usually pretty good, but occasionally there are some things that occur only at our site that tech support has issues with.

Other Advice

Plan ahead and make sure you right-size it. How much headroom do you really need? How many spindles are you going to attach? Are you really going to share workloads, or do you want to separate some of them? We don't segregate our infrastructure, which I don't like, but all that costs money. In any case, make sure that you have failover.
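
The right-sizing questions above boil down to simple arithmetic. The following is a hedged back-of-envelope sketch in Python; all per-drive IOPS figures, the parity-RAID write penalty and the headroom factor are illustrative assumptions, not NetApp sizing guidance, and real sizing should use the vendor's tools and your own measured workload.

import math

# All figures are illustrative assumptions, not vendor sizing guidance.
PER_DRIVE_IOPS = {"7.2k_sata": 75, "10k_sas": 140, "15k_sas": 175}   # rough, assumed values

def spindles_needed(target_iops, read_pct, drive,
                    write_penalty=4, headroom=0.30):
    """Back-of-envelope drive count for a target front-end IOPS with growth headroom."""
    reads = target_iops * read_pct
    writes = target_iops * (1 - read_pct)
    backend_iops = reads + writes * write_penalty   # parity RAID amplifies writes (assumed penalty)
    backend_iops *= (1 + headroom)                  # leave room to grow
    return math.ceil(backend_iops / PER_DRIVE_IOPS[drive])

print(spindles_needed(5000, read_pct=0.7, drive="10k_sas"))   # 89 drives under these assumptions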

Disclosure: I am a real user, and this review is based on my own experience and opinions.
it_user330081 - PeerSpot reviewer
Principal Computer Engineer with 1,001-5,000 employees
Real User
Since implementation, our performance has definitely increased, but they're upgrading the performance monitoring tool, which is the main thing I think needs improvement.

Valuable Features

I think the most valuable features are the flexibility with volumes, resizing, and performance.

Improvements to My Organization

I think that our performance has definitely increased.

Room for Improvement

I think that they are upgrading the performance monitoring tool, which is the main thing I think needs improvement. They are changing things from version to version, and you want to see things improve – I think we will continue to see more and more benefits.

Use of Solution

We have been using it since 2013.

Stability Issues

Pretty solid in terms of stability.

Scalability Issues

We haven't really grown it, but I see a roadmap; the only potential problem there is cost. It's not an expensive product per se, but budget issues get in the way, and people sometimes don't evaluate the cost correctly.

Customer Service and Technical Support

NetApp overall has been really good in terms of technical support.

Initial Setup

Initial setup was hard a year ago, but now we just did another setup and everything was smooth. It’s gotten a lot better in the last year we’ve been using it.

Other Advice

If you are on the fence: it's been a very good product. You don't want to build your own solution; you want to use the appliance for its flexibility. Overall performance has gotten a lot better.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
The interesting thing about VVOLs is that not all implementations will be equal. It puts more responsibility on the array by moving storage operations to it that were previously handled by vSphere.

More information on VVOLs is being released every week, and it is only now that we are getting a chance to play with the full release code and dig into the detail of how it works. Let's start off by exploring the benefits of VVOLs that are likely to make it a game-changing technology:

Granular Control of VMs

  • Enable VM-granular storage operations on individual virtual disks for the first time, including control of the following capabilities:
    • Auto Grow
    • Compression
    • De-duplication
    • Disk Types: SATA, FCAL, SAS, SSD
    • Flash Accelerated
    • High Availability
    • Maximum Throughput: IOPS & MB/s
    • Replication
    • Protocol: NFS, iSCSI, FC, FCoE

Enhanced Efficiency and Performance

  • Off-load VM snapshots, clones and moves to the array
  • Automatically optimise I/O paths for all protocols
  • No VMFS, therefore
    • Virtual disks natively stored on the array
    • Datastore space management is not required
    • Size limits are dictated by the guest and array
    • Zeroing, either on disk creation or use, is not required
    • vSphere UNMAP, when a VM is deleted, is not required
    • Guest UNMAP commands are passed directly to the VVOL
    • Thin-provisioning is managed by the array
  • Minimise LUN and path consumption, NFS mount usage, and LIF count and IP address consumption

Automated Policy Based Management

  • Create a library of reusable storage profiles
  • Match the profiles to storage capabilities
  • Provision VMs using storage profiles
  • Alert when a VM no longer conforms to the profile (a sketch of this matching and compliance check follows below)
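
To make the policy-matching idea above concrete, here is a minimal, hypothetical sketch of how a library of reusable storage profiles could be matched against the capabilities a datastore advertises, and used to flag non-compliant VMs. This is not the SPBM/VASA API – all names, fields and values are illustrative assumptions.

# A small "library" of reusable storage profiles (illustrative values only)
GOLD   = {"replication": True,  "flash_accelerated": True,  "max_iops": 10000}
BRONZE = {"replication": False, "flash_accelerated": False, "max_iops": 1000}

# Capabilities each (hypothetical) VVOL datastore advertises
DATASTORES = {
    "vvol_ds_flash": {"replication": True,  "flash_accelerated": True,  "max_iops": 50000},
    "vvol_ds_sata":  {"replication": False, "flash_accelerated": False, "max_iops": 2000},
}

def satisfies(capabilities, profile):
    """A datastore satisfies a profile if every required capability is met."""
    for key, required in profile.items():
        offered = capabilities.get(key)
        if isinstance(required, bool):
            if required and not offered:
                return False
        elif offered is None or offered < required:
            return False
    return True

def compatible_datastores(profile):
    """Provisioning: which datastores can host a VM with this profile?"""
    return [name for name, caps in DATASTORES.items() if satisfies(caps, profile)]

def compliant(vm_profile, vm_datastore):
    """Compliance: alert (False) when a VM no longer conforms to its profile."""
    return satisfies(DATASTORES[vm_datastore], vm_profile)

print(compatible_datastores(GOLD))        # ['vvol_ds_flash']
print(compliant(GOLD, "vvol_ds_sata"))    # False -> raise an alert

The real mechanism works through VASA rather than a dictionary lookup, but the principle is the same: the profile is a set of requirements, the datastore advertises capabilities, and compatibility and ongoing compliance are simply checks of one against the other.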

To get VVOLs up and running you need cDOT 8.2.3 or above, Virtual Storage Console 6.0 and VASA Provider 6.0 – for more background information see A deeper look into NetApp’s support for VMware Virtual Volumes.

The On-Demand engine

One of the best kept secrets of cDOT 8.3 was the inclusion of the On-Demand engine which consists of the following new commands:

  • Single-File Move on Demand (SFMoD)
  • Single-File Copy/Clone on Demand (SFCoD)
  • Single-File Restore on Demand (SFRoD)

When a command is triggered, data access at the destination begins immediately, while in the background the data is copied or moved from source to destination. The commands cannot be invoked directly; rather, other operations take advantage of them (e.g. VVOLs and LUN moves). So when a change to a VVOL's policy means it has to be moved from one volume to another (even across controllers), the On-Demand engine non-disruptively moves data access from the source to the destination instantly. All writes go to the new destination and, while the data is being copied from the source, reads are redirected back to the original volume as required. If a VVOL is migrated elsewhere in the cluster, a rebind operation automatically changes the I/O path to the new closest PE, maintaining optimum performance and reducing complexity and latency.
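
The behaviour described above can be illustrated with a small, purely conceptual Python sketch (an illustration of the idea, not the ONTAP implementation): access switches to the destination immediately, new writes land there, and reads of blocks that have not yet been copied are redirected back to the source while a background task completes the move.

class OnDemandMove:
    """Conceptual model only: the destination serves all I/O immediately, writes
    land at the destination, and reads of not-yet-copied blocks are redirected
    to the source while a background task copies the remaining blocks across."""

    def __init__(self, source_blocks):
        self.source = dict(source_blocks)     # original volume
        self.dest = {}                        # new volume, initially empty
        self.pending = set(source_blocks)     # blocks still to be copied

    def write(self, block, data):
        self.dest[block] = data               # all new writes go to the destination
        self.pending.discard(block)           # no need to copy a superseded block

    def read(self, block):
        if block in self.dest:
            return self.dest[block]           # already moved (or freshly written)
        return self.source[block]             # redirected back to the source volume

    def background_copy_step(self):
        if self.pending:
            block = self.pending.pop()
            self.dest[block] = self.source[block]

move = OnDemandMove({0: b"a", 1: b"b", 2: b"c"})
move.write(1, b"B")                           # new write lands on the destination
print(move.read(1), move.read(2))             # b'B' (destination), b'c' (redirected to source)
while move.pending:
    move.background_copy_step()               # background copy completes the move
print(move.read(2))                           # b'c', now served from the destination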

Not all VVOLs implementations will be equal

The interesting thing about VVOLs is that not all implementations will be equal, as it puts more responsibility on the array by moving many storage operations to it that were previously handled by vSphere – you therefore need an array that provides efficient:

  • Thin-provisioning
  • Snapshots
  • Clones
  • Non-disruptive VM mobility

The current snapshot technology in VMFS is, to say the least, very poor – best practice is to have no more than 2-3 snapshots in a chain (even though the maximum is 32) and to keep no single snapshot for more than 24-72 hours. The reason is simple: storage performance will suffer if you create a snapshot on a VM. So if an array supports VVOLs and we can off-load snapshot and clone creation to the array, then surely we have solved the problem and can keep hundreds of snapshots. As always, it is not so simple – if the array uses inefficient CoW snapshots, then you will not gain much over the standard vSphere snapshots. Thin-provisioning is another area where some arrays are very efficient, but many suffer a significant performance drop unless thick LUNs are used.
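
The performance difference is easier to see with a toy model. The sketch below simply counts block I/Os for an overwrite under copy-on-write versus redirect-on-write; it is an illustration, not any vendor's code. CoW must read and copy the old block aside before overwriting in place, roughly tripling the I/O while a snapshot exists, whereas RoW writes the new block to a fresh location and only updates pointers.

def cow_overwrite(stats, snapshot_exists):
    """Copy-on-write: the old block is copied aside before the in-place overwrite."""
    if snapshot_exists:
        stats["reads"] += 1      # read the old block
        stats["writes"] += 1     # copy it into the snapshot area
    stats["writes"] += 1         # overwrite the block in place

def row_overwrite(stats, snapshot_exists):
    """Redirect-on-write: new data goes to a new location; the old block simply
    stays where it is for the snapshot, so only pointer metadata changes."""
    stats["writes"] += 1

for name, overwrite in [("CoW", cow_overwrite), ("RoW", row_overwrite)]:
    stats = {"reads": 0, "writes": 0}
    for _ in range(1000):                     # 1,000 overwrites with a snapshot present
        overwrite(stats, snapshot_exists=True)
    print(name, stats)
# CoW {'reads': 1000, 'writes': 2000}
# RoW {'reads': 0, 'writes': 1000}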

The nice thing about FAS is that it has excelled at the first three points above for many years and the last point has been introduced with the On-Demand engine in cDOT 8.3 – there are plenty of arrays on the market that will be enabled for VVOLs, but they will not be able to claim efficient support for these features without massive re-engineering work.

Other points of note

It is essential to back up the VASA Provider VM; this can be achieved using the built-in backup capabilities of the array via one of the following options:

  • The backup and recovery features of VSC
  • The built-in scheduled FlexVol snapshot copies

NetApp All-Flash FAS has emerged as the first storage array to successfully complete validation testing with Horizon View 6 with VVols.

The VADP APIs that backup vendors use are fully supported on VVOLs, so backup software using VADP should be unaffected.

For a detailed breakdown of vSphere product and feature interoperability with VVOLs click here

Get hands on with VVOLs on FAS

If you would like to gain a detailed understanding of how the technology works, we have created, in conjunction with VMware and NetApp, a series of demo café events – to find out more click here.

VVOLs is certainly an interesting technology, and I am sure what we have today is only the beginning of the journey – it is going to be interesting to see how it develops over the coming years. We know for sure that NetApp will be making improvements to cDOT to enable things like replication to be set at the VVOL level.

What do you think – is VVOLs as game changing as VMware thinks?

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
it_user3396 - PeerSpot reviewer
Team Lead at Tata Consultancy Services
Top 5, Real User

Saluting Mark:>)

Henry

PeerSpot user
Solutions Architect with 51-200 employees
Vendor
We have flash caching, but it would be nice if we could move data between flash, SAS and SATA drives.

As we move into the world of Software-Defined Storage, it "sticks out like a sore thumb" when an array vendor only makes new software releases available on their next-generation hardware. The problem with this is that even if you purchase at the very beginning of a product's life cycle, at best you will get one round of feature enhancements; after that, all software development is focused on the next-generation product. This often even includes support for new drive types – again, they are only supported on the latest-generation hardware.

This problem is very evident when it comes to support for VMware Virtual Volumes – any array vendor that will be releasing new hardware next year is unlikely to provide support for Virtual Volumes on their currently shipping product. My view is that the industry cannot continue like this and instead they need to make sure new microcode versions and drive technologies are supported on the current shipping product and at least the previous generation – without this there is a real danger that your new storage array becomes obsolete shortly after purchase.

The good news for NetApp customers is that Clustered Data ONTAP (cDOT) meets my criteria above, so the recently announced version 8.3 will not only work on 2014-generation hardware (2500 and 8000 series), but on previous generations as well.

So what’s 8.3 all about?

Major features

  • MetroCluster support – to enable continuous availability
  • SnapMirror to Tape (SMTape) – simplifies and speeds up backup to tape
  • Virtual Volumes support – enables native storage of VMDKs (requires vSphere 6)

Efficiency enhancements

  • Combined SnapMirror and SnapVault – so that you only need to send the data once, rather than having separate SnapMirror and SnapVault copies
  • SnapMirror and SnapVault Compression – traffic can now be optionally compressed to reduce bandwidth requirements
  • Root Drive and Flash Pool Partitioning – no longer requires Root Vols and Flash Pool drives to be dedicated to a single node, therefore providing better capacity utilisation
  • Flash Pool enhancements – caches overwrites larger than 16K and compressed blocks, increases the usable capacity, and supports much larger pool sizes (up to 4x)
  • Inline zero write detection and elimination – so host disk-zeroing activity does not consume I/O or capacity (see the sketch after this list)
  • Significant performance improvements – further multi-core, SSD random read, CIFS, replication and cloning optimisations
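
As a side note on the inline zero write detection item above, the idea can be sketched in a few lines of Python (illustrative only, not ONTAP code): an incoming block that is entirely zero is recorded as a hole in the block map instead of being written to disk, so it consumes neither I/O nor capacity.

BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)

block_map = {}      # LBA -> physical block number, or "hole" for an elided zero block
disk_writes = 0

def write(lba, data):
    global disk_writes
    if data == ZERO_BLOCK:
        block_map[lba] = "hole"        # metadata only: no disk I/O, no capacity consumed
    else:
        block_map[lba] = disk_writes   # stand-in for allocating a real physical block
        disk_writes += 1               # stand-in for an actual disk write

write(0, ZERO_BLOCK)                   # e.g. a guest OS zeroing out a new virtual disk
write(1, b"x" * BLOCK_SIZE)
print(disk_writes)                     # 1 -- the zero write consumed neither I/O nor space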

Migration and Administration Tools

  • 7-Mode Transition Tool – now supports SAN as well as NAS
  • Foreign LUN Import (Offline) – to simplify 3rd party (EMC, HDS, HP) SAN data migration
  • LUN migration – whereas previously an entire volume could be non-disruptively moved around the cluster, this can now also be performed at the LUN level
  • Disaster Recovery fail-over – to a specific point-in-time snapshot copy at the DR site for recovery from mirrored corruption
  • Automated Non-disruptive Upgrade – requires just 3 commands to upgrade an entire cluster

8.3 is a milestone release for NetApp as it marks the end of development for 7-Mode: 8.3 only includes the cDOT build. Overall I think NetApp are finally in a good place with cDOT and they can now put the 7-Mode platform behind them and focus on innovating.

So what would we like to see in the next version of cDOT?

  • SnapLock (for retention and compliance) – the last remaining major feature to be ported over from 7-Mode
  • Erasure coding – to enable rapid drive rebuilds
  • Sharing of drives across controllers – we are already starting to see this with the new drive and Flash Pool partitioning features
  • Detaching of the drives from the controllers – so that the failure of an HA pair within a cluster does not result in downtime
  • Controller based Flash modules – in place of Root Vol drives
  • Advanced QoS – to enable setting of Service Level Objectives rather than just limits
  • Automated Tiering – we have flash caching, but it would be nice if we could move data between flash, SAS and SATA drives
  • Integrated file archiving – to move older files to secondary storage or the cloud
  • Encryption – provided by the controllers rather than drives
  • MetroCluster granular fail over – so volumes or even Virtual Volumes can be “moved” between sites
  • MetroCluster IP replication – either using FCIP bridges or native IP connectivity
  • MetroCluster Active/Active – so volumes/LUNs can be active on both sides of the cluster
Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
Policies are applied to FlexVols but it would be useful if they could also be specified for an individual Virtual Volume.

Virtual Volumes are the flagship feature of vSphere 6.0, as they enable VM-granular storage management, and NetApp FAS running Clustered Data ONTAP 8.3 is one of the first platforms to support the technology.

Today storage administrators have to explain to the VM administrators how to identify which datastores to use for each class of VM, which is typically achieved using a combination of documentation and datastore naming conventions – however, consistency and compliance are difficult to achieve.

Virtual Volumes change this by enabling the storage administrator to provide vCenter with detailed information on the capabilities of each datastore. VM Storage Policies, whilst they existed in previous versions of vSphere, were not sophisticated enough to query the actual storage for its capabilities; the VMware APIs for Storage Awareness (VASA) Provider 2.0 resolves this problem. Now the VM administrator can create VMs using Virtual Volumes and use the VM Storage Policy wizard to easily determine which datastores are compatible with their needs.

What components are required for Virtual Volumes?

VASA Provider (VP): The NetApp VP is deployed as an OVA virtual appliance and is managed by the Virtual Storage Console plugged in to the vSphere Web Client. VMs running on Virtual Volumes require that the VP is running in order to create the swap Virtual Volume at power on – the VP should not be running on Virtual Volumes since it would be dependent on itself.

Storage Container (SC): An SC is a set of FlexVol volumes used for Virtual Volume datastores. All the FlexVols within an SC must be accessed using the same protocol (NFS, iSCSI, or FC) and be owned by the same Storage Virtual Machine (SVM), but they can be hosted on different aggregates and nodes of the NetApp cluster.

Protocol Endpoint (PE): The I/O path to a Virtual Volume is through a PE, with the Virtual Volume bound to the PE through a binding call managed by the VP. The VP determines which PE is on the same node as the FlexVol containing the Virtual Volume and binds the Virtual Volume to that PE.

For block protocols, a PE is a small (4MB) LUN, and the VP creates one PE in each FlexVol that is part of a Virtual Volume datastore. The PE is automatically mapped to initiator groups created and managed by the VP.

For NFS, a PE is a mount point to the root of the SVM and is created by the VP for each data LIF of the SVM using the LIF’s IP address. The PE is automatically created when the first Virtual Volume datastore is created on the SVM along with the appropriate export policy rules.
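
Under the assumption that the binding logic simply has to pick a PE on the same node as the FlexVol holding the Virtual Volume, a minimal sketch of the idea looks like this (all names are made up; this is not the VASA Provider's code):

FLEXVOL_NODE = {"flexvol_a": "node1", "flexvol_b": "node2"}      # which node hosts each FlexVol
PROTOCOL_ENDPOINTS = [
    {"name": "pe_node1_iscsi", "node": "node1", "protocol": "iscsi"},
    {"name": "pe_node2_iscsi", "node": "node2", "protocol": "iscsi"},
]

def bind(vvol, flexvol, protocol):
    """Bind a Virtual Volume to a PE on the same node as its containing FlexVol."""
    node = FLEXVOL_NODE[flexvol]
    for pe in PROTOCOL_ENDPOINTS:
        if pe["node"] == node and pe["protocol"] == protocol:
            return pe["name"]          # I/O for this VVOL now flows through this PE
    raise LookupError(f"no {protocol} PE on {node} for {vvol}")

print(bind("vm1-data.vmdk", "flexvol_b", "iscsi"))               # pe_node2_iscsi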

Storage Capability Profile (SCP): An SCP is a set of capabilities for a volume or set of volumes and may include features such as availability, performance, capacity, space efficiency, replication, or protocol.

How could things be improved in the future?

Today De-duplication, Compression, SnapMirror and SnapVault policies are applied to FlexVols – it would be useful if they could also be specified for an individual Virtual Volume, which in turn would enable MetroCluster to non-disruptively “move” an active Virtual Volume from one site to another.

It is great to see that NetApp is ahead of the game with regard to support for Virtual Volumes – it is also nice to see that the 8.3 release can be installed on older versions of hardware allowing FAS customers, who purchased their systems a number of years ago, to take advantage of Virtual Volumes.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
PeerSpot user
PeerSpot user
Solutions Architect with 51-200 employees
Vendor
NetApp do have a “pure” block storage array but it lacks the advanced data services enabled by WAFL

For many years, traditional storage array vendors have claimed that their platforms are superior to NetApp FAS for block storage because they do not have the overhead of a Pointer-based Architecture – let's explore this in more detail:

What do we mean by “pure” block storage?

Uses a Fixed Block Architecture whereby data is always read from and written to a fixed location (i.e. each block has its own Logical Block Address) – in reality most block storage arrays provide the option to use pages (ranging from 5 MB to 1 GB) where the LBA is fixed within the page, but the page can be moved to facilitate tiering.

The advantages of this architecture are:

  1. No performance overhead – it is very easy for the storage array to calculate the location of a block and there is no metadata to cache
  2. No capacity overhead – as there is no additional metadata to manage
  3. No fragmentation – blocks always remain together which enables good sequential IO performance on HDDs
  4. Lends itself to tiering – to automatically place data on the most appropriate drive

The disadvantages of this architecture are:

  1. Advanced data services – cannot be supported:
    1. Granular De-duplication, Compression and Thin Provisioning – typically 4K-32K
    2. Low-overhead snapshots – using Redirect-on-Write rather than Copy-on-Write
    3. Hypervisor technologies like Virtual Volumes (VVOLs) – as VMDKs need to be stored as objects/files
  2. Write performance overhead – especially when using parity RAID (i.e. R5 or R6)
  3. Replication performance overhead – when based on snapshots (as snapshots have a significant overhead)
  4. Separate block and NAS – NAS requires a separately managed file system to be laid on top of the block storage

How does NetApp FAS compare?

FAS uses a Pointer-based Architecture called WAFL, utilising 4K blocks which can be located anywhere, so we have to reverse the above list of advantages and disadvantages. NAS file systems are delivered along with block storage on top of WAFL – block protocols do not sit on top of the NAS protocols; instead they interact directly with WAFL.
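
The architectural difference can be sketched in a few lines of illustrative Python (a conceptual model, not WAFL internals): a fixed-block array computes the physical location directly from the LBA, so data can never move, while a pointer-based layout translates every LBA through a block map and is therefore free to place – and later share or relocate – each 4K block anywhere.

BLOCK_SIZE = 4096

class FixedBlockLayout:
    """Physical location is a pure function of the LBA: no metadata to manage,
    but data can never move, so granular dedupe, compression and RoW snapshots
    are hard to retrofit."""
    def physical_address(self, lba):
        return lba * BLOCK_SIZE

class PointerBasedLayout:
    """Every LBA is translated through a block map, so each write can be
    redirected to a fresh location and blocks can later be shared or moved."""
    def __init__(self):
        self.block_map = {}            # LBA -> physical block number
        self.next_free = 0

    def write(self, lba):
        self.block_map[lba] = self.next_free     # redirect the write anywhere we like
        self.next_free += 1

    def physical_address(self, lba):
        return self.block_map[lba] * BLOCK_SIZE  # the extra lookup is the metadata cost

fixed = FixedBlockLayout()
pointer_based = PointerBasedLayout()
pointer_based.write(7)
print(fixed.physical_address(7), pointer_based.physical_address(7))   # 28672 0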

The good news is that WAFL has been around since 1993, so it is a very mature and highly optimised technology – retrofitting advanced data services to a "pure" block storage array is not straightforward and requires major re-engineering work.

So which is best?

Well, we can debate this endlessly, and clearly, depending on your use case, one may be a better choice than the other. Five years ago this was a valid debate, but to be honest it is now a moot point, as today all storage platforms have to support the advanced data services listed above and therefore need a Pointer-based rather than a Fixed Block Architecture.

Let’s explore some examples of this:

  • VMware
    • Virtual SAN – version 2 will include the Virsto Pointer-based Architecture to enable RoW snapshots and clones, and moving forward many more of the advanced data services
  • EMC
    • VNX/VNXe – uses an 8K Pointer-based Architecture to provide RoW snapshots, De-duplication, Compression and Thin Provisioning
    • XtremIO – uses an 8K Pointer-based Architecture to provide RoW snapshots, De-duplication, Compression and Thin Provisioning
    • VMAX3 – uses 128K tracks to provide RoW snapshots and Thin Provisioning, and in the future support for VVOLs
  • HDS
    • HNAS – uses a 4K/32K Pointer-based Architecture to provide RoW snapshots, De-duplication and Thin Provisioning
    • VSP G1000 – the new Storage Virtualization Operating System (SVOS) was built with VVOLs in mind

It is also worth pointing out that none of the start-up storage vendors that have come onto the market in the last five years has a "pure" block storage platform – it would just not make sense if they did.

What is interesting is that NetApp do have a "pure" block storage array – the E-Series, which provides excellent price/performance but lacks the advanced data services enabled by WAFL – and VVOLs support is not expected for some time.

So for me, "pure" block storage is no longer sustainable, and dismissing products like NetApp FAS because they are not "pure" block no longer makes sense. Moving forward, the question is not whether your storage platform has a ground-up all-flash design, but whether it has a ground-up Pointer-based Architecture.

“Pure” block storage is dead – long live WAFL and the like.

Disclosure: My company has a business relationship with this vendor other than being a customer: We are Partners with NetApp.
it_user186357 - PeerSpot reviewer
it_user186357Solutions Architect with 51-200 employees
Vendor

Please post any questions at blog.snsltd.co.uk

Best regards
Mark

it_user527118 - PeerSpot reviewer
Systems Engineer at a media company with 1,001-5,000 employees
Vendor
Supports anything from Windows CIFS shares to UNIX NFS shares to block-level storage.

What is most valuable?

Its flexibility: it will support anything, from Windows CIFS shares to UNIX NFS shares to block-level storage, on the same platform, on the same disks, with the same interface. It's not specific to one set or another.

How has it helped my organization?

I'm not sure that I can comment. This is what we've always used, so I have nothing to compare it to.

It has a steep learning curve. Once you've reached the top of that curve, though, it's much easier to manage since it is all in the same system. You don't have a separate system to manage block-level storage or a separate system to manage other types of shares.

What needs improvement?

One of the issues we have had with NetApp in upgrading over the years is that migrating data from one system to another is one-way only. If you have a new storage system that is going to replace an old one, and you're transitioning slowly from one to the other, you can copy the data in one direction, but that same tool, which is typically used as a disaster recovery tool, can't be used to sync it back in the other direction. That level of backward compatibility would be very nice to have.

What do I think about the stability of the solution?

In the 20-some years I've been working with NetApp equipment, the system has caused one outage. Other than that, for any of the failures it has had, the redundancy that is built into it has handled the failure and left the systems up and the data available.

What do I think about the scalability of the solution?

It is very scalable. It has gotten much more scalable. With every level, it's becoming more and more scalable.

How are customer service and technical support?

Technical support directly from NetApp is usually very, very good. Compared to others, the expertise of the individual you talk to on the phone is usually very good. You can talk directly to an engineer if that's required. We've actually talked to hardware development people on occasion, when that has been required.

The support team is very knowledgeable and very accessible.

Which solution did I use previously and why did I switch?

We invest in a new solution when the existing solution goes out of its initial support. We have been looking for new options for about six months now because the extended support is very, very expensive.

How was the initial setup?

I was involved in migrations from one system to another system. The initial setup, the cabling, the hardware side of it is tedious.

Have NetApp come in and do the initial install of the physical system for you. It's definitely worth the time.

Which other solutions did I evaluate?

We are looking at Pure Storage. We have looked at and discarded an EMC option. That's why I recently attended a NetApp conference. We were looking to see the next level of the NetApp All-Flash FAS.

We rejected the EMC option because we had an EMC piece of gear in-house that had a failure. It continued to operate, like it's supposed to. The problem was that the part on the piece of EMC gear that failed could not be replaced without causing downtime. It might as well have just caused the downtime initially. We have migrated everything off of that. It was a stupid little thing. It wasn’t like the backplane failed; it was a stupid little thing. I would not recommend it, and we will probably never go with EMC again.

What other advice do I have?

Take your time. It's a very dynamic market right now. Make sure that the information you're getting on a system is for what's currently available and not for what they're expecting to have next quarter. A lot of the next-quarter stuff is vapor – they don't actually have it, they haven't gotten around to putting it in place yet, and they promise and promise until they get money from you.

That's one of the reasons why we're holding off on making a decision until the gear is actually available.

Disclosure: I am a real user, and this review is based on my own experience and opinions.