- Performance
- Contingency, failover, and data recovery
- It's a good vendor.
They have always been really supportive, easy to get ahold of, and easy to work with.
The primary use case for All Flash is improved performance.
The solution could be simplified for better performance, though they are already working on that. Also, making the UI more user-friendly couldn't hurt.
Over five years.
It's very stable. We haven't had any problems in our environment.
It is very easy to scale.
We have a good relationship with our representatives through them. Our sales representative gave us a lot of information as far as moving forward with upgrading stuff.
Technical Support: It has been used quite a few times, and we have always had a good response from them. They are very knowledgeable.
It was very straightforward.
We use both block and file storage.
NetApp is the leader in the field for high-performance storage systems. They have always been our primary go-to. We are more likely to consider NetApp for mission-critical storage systems based on our experience.
Advice for someone looking at similar products: Just do the research beforehand and you'll be able to tell which vendors separate themselves from the rest, based on other companies' reviews out there. I would definitely recommend NetApp All Flash FAS.
Most important criteria when selecting a vendor: compatibility and communication. Being able to rely on them whenever we need them.
The benefits are automatic: power consumption is very low with All Flash and the performance is very high. It has helped us better serve the customers who use our VMware datastores.
The scale-out capability is the most valuable feature. You can go to 24 nodes, which is very cool. We are primarily using a VMware environment; we use it for VMware datastores for our hosting customers. We have 32 petabytes of data on NetApp storage, so we definitely use it for primary storage.
Going forward, I would like more performance analytics on the array itself, instead of having to use a separate tool.
It's very stable.
We have a 9.1 operating system on it, and it's very stable. We did an upgrade online, and we had no issues. We did a failover testing, and nothing. It's solid.
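For reference, a failover test like the one described is typically driven from the clustered Data ONTAP CLI. A rough sketch (the node name here is a placeholder, not from the review):

```
::> storage failover show                      # confirm each node is connected to its HA partner
::> storage failover takeover -ofnode node-01  # partner takes over node-01's storage
::> storage failover show                      # verify takeover state
::> storage failover giveback -ofnode node-01  # return the aggregates once node-01 is healthy
```

With a healthy HA pair, clients keep getting served throughout, which matches the "failover testing, and nothing" experience above.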
The scalability is good.
I use it for small issues, like how to configure multiple VLANs. It was pretty easy to set up, and the technical support was very good.
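For context, configuring multiple VLANs in clustered Data ONTAP is a short CLI exercise. A hedged sketch, with the node name, port, and VLAN IDs made up for illustration:

```
::> network port vlan create -node node-01 -vlan-name e0c-100   # tag VLAN 100 on port e0c
::> network port vlan create -node node-01 -vlan-name e0c-200   # a second VLAN on the same port
::> network port vlan show -node node-01                        # verify the tagged VLAN ports
```

LIFs can then be homed on the tagged ports (e.g., e0c-100) to separate tenant traffic.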
We decided, as a company, not to buy any more disk storage for our primary customers, and that's the reason we needed All Flash. NetApp was a perfect fit because we could grow as needed, and its scale-out architecture works for us. We were looking for a high-performance block solution with a small, low footprint, and NetApp fits right in there.
Very straightforward. NetApp already does all the installation for us. They just come in and set the IPs, etc.
It's a pretty solid solution. If you're looking for a block solution, or file solution, on flash, you definitely have to look at it.
We have a vast NetApp experience, so the fact that it can be managed like the others is great. It has the most consistent performance for storage for VMware. We were also specifically looking for an all-flash system.
It took only a very short time to implement, as it was live just a few hours afterwards. It also integrates well with our environment, specifically with disaster recovery, high availability, management, performance, and historic and current performance metrics.
Most of the things we were waiting for are already in this version, so I’m not really waiting for any new features. It could improve on the initial learning curve, as it can be steep.
We've been using it for one-and-a-half months with 1,500 VMs exclusively as the VMware backend. It's a mix between Windows and Linux-hosted OS. We've been running clustered Data ONTAP since April 2014.
We've had no issues with deployment.
Thus far, it’s 100% stable.
Scalability is quite good in a NAS environment, and in a SAN it's good enough.
Technical support is very good, 8/10.
We also have IBM products, but we chose NetApp instead because IBM does not have the necessary plugins for integration with vSphere.
If you are new to NetApp, it is a bit complex, but if you know the system, it is quite simple. There is definitely a learning curve, but every NetApp system works the same, so if you know NetApp, it’s quite easy.
It performs as we expect and is stable. Always do a proof of concept if you go with AFF, especially for a VMware environment. Also, opt for the OnCommand Insight software for performance metrics and recommendations.
The Register wrote a damning piece about NetApp a few days ago. I felt it was irresponsible because this is akin to kicking a man when he’s down. It is easy to do that. The writer is clearly missing the forest for the trees. He was targeting NetApp’s Clustered Data ONTAP (cDOT) and missing the entire philosophy of NetApp’s mission and vision in Data Fabric.
I have always been a strong believer that you must treat data like water. Just as Jeff Goldblum famously said in Jurassic Park, "Life finds a way", data, as it moves through its lifecycle, will find its way into the cloud and back.
And every storage vendor today has a cloud story to tell. It is exciting to listen to everyone sharing their cloud story. Cloud makes sense when it addresses different workloads such as the sharing of folders across multiple devices, backup and archiving data to the cloud, tiering to the cloud, and the different cloud service models of IaaS, PaaS, SaaS and XaaS.
But if we take a look at all these cloud offerings and also computing platforms in our own server room or in the data center, the on-premise infrastructure, the data landscape is NOT coherent. The data flow is not in harmony, and it is not congruent. If we imagine data as water, there is hindrance of data movement as it moves from one stage to another in the data lifecycle. This applies to almost every storage, system or cloud vendor today.
Even worse, organizations lose control of the data along the way. When data moves out of an on-premise data center to the cloud, IT all but hands over a large amount of control of its data to the cloud service provider.
Remember the Nirvanix story from about 2 years ago? When Nirvanix went belly up, its customers went into panic mode. They were asked to remove their data within 2 weeks! One Nirvanix customer had 20PB stored in the Nirvanix Storage Delivery Network. How the F do you think that customer felt in that whole Nirvanix fiasco?
This is exactly what I mean about losing control of data.
As Cloud Computing gains a much deeper foothold in IT, the data landscape does not change. The data lifecycle does not change. Data still moves from an active stage to a passive stage, and perhaps back to the active stage when needed. As data moves through its lifecycle, its value changes as well.
That is what the NetApp Data Fabric can do for data in any organization: a single data management architecture that allows data to transcend from on-premise data platforms on NetApp (or 3rd-party platforms using NetApp FlexArray) to the data platforms of hybrid clouds at cloud service providers, on to the data platforms of hyperscalers, and back. All this data movement is secure and, more importantly, allows organizations to maintain control of their data, wherever it may reside.
I have put my views of the NetApp Data Fabric in the picture below (pardon my PowerPoint skills).
The underpinnings and foundation of the Data Fabric is NetApp Clustered Data ONTAP. And with the latest release of cDOT 8.3.1, the technology has reached an important milestone to realize the single data landscape architecture.
Furthermore, I cannot at this moment recall any storage vendor or cloud service provider adopting a philosophy like Data Fabric, which means their customers will likely encounter hindrance of their data as it moves between premises and clouds. Just like water trapped in a watering hole, eventually it will dry up or become useless.
I am not trying to deride the writer of the article, but instead of sensationalizing the NetApp story, perhaps it would be better to have a deeper understanding of where NetApp is now and where they are going. From the outside, they look to be going through a rough patch right now, but as an ex-employee, NetApp has always been my little engine that could.
The intent of my response in this blog is really to help everyone open their eyes, because it is all about a single, secure data architecture. Clustered Data ONTAP happens to be the technology that makes this happen.
Remember … Data will find its way. There is no stopping that.
VMware multi-tenancy and SnapMirror destination; multiple customers' filesystems, too, with no problem across multiple AD domains.
Reliability, flexibility, and multi-tenancy. We host 20 clients' virtual data centers on our A200.
I scaled out our previous two-node cDOT cluster on the fly by adding cluster switches and then the two-node A200. After that, the data migration between the FAS2554 and the A200 was done non-disruptively, during business hours.
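The non-disruptive migration described above maps to ONTAP's volume move. A minimal sketch, assuming hypothetical SVM, volume, and aggregate names:

```
::> volume move start -vserver svm1 -volume vol_clients -destination-aggregate a200_aggr1
::> volume move show -vserver svm1 -volume vol_clients   # monitor; cutover completes without client disruption
```

Because the move runs inside the cluster, clients stay connected through the same LIFs while data lands on the A200's aggregate.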
The full bundle is too expensive, and it is needed to implement the native replication (i.e., SnapMirror) and backup (i.e., SnapVault) features.
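For anyone weighing those licenses, the replication and backup features referred to are driven by commands along these lines (the paths and SVM names are illustrative; on recent ONTAP both relationships use type XDP with different policies):

```
::> snapmirror create -source-path svm1:vol_clients -destination-path svm_dr:vol_clients_dst -type XDP -policy MirrorAllSnapshots    # DR mirror
::> snapmirror initialize -destination-path svm_dr:vol_clients_dst
::> snapmirror create -source-path svm1:vol_clients -destination-path svm_bkp:vol_clients_vault -type XDP -policy XDPDefault         # SnapVault-style backup
```

The licensing, not the setup, is the hurdle; the CLI itself is a few commands per relationship.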
Our system is very stable and reliable. Of course, it needs to be maintained and monitored, but even in the case of a network switch failure, the A200 keeps serving data. The initial setup is very important, so you have to focus on the final architecture.
Very good.
Tech support is very responsive and effective at finding solutions to issues; most issues can be resolved by reading the KBs.
We had a FAS2554 and needed to scale out for space and performance.
The initial setup must be done via the CLI; storage space provisioning is done via the GUI. There is good interaction with VMware via VSC.
I'm on the vendor team and am the storage administrator.
I need to ask my CEO for it.
The full bundle is too expensive, i.e., the full licenses needed to implement native replication and backups.
Starting from a FAS2554, it was the best solution.
Good deduplication and compression ratios.
It is very user friendly. Someone in my position needs to be able to bring up and shut down the system quickly, efficiently, and shut it down if there's a power outage quickly and efficiently without having trouble. It also supports VMware, which is what we use; but we use the NetApp as our only filer.
I am trying to understand it more, so I can employ it better during high tense situations.
I have been able to manage the system easily myself since we got NetApp four years ago.
The ILOM's graceful shutdown feature is no longer there in the version that I have. I believe I'm using 7.0.x, on the FAS2040 and also the FAS2020. I can't say where else it needs improvement because I'm just not that well versed in it yet.
It is excellent in terms of stability. I've had no issues during the last six years that I've had NetApp. Just recently, on one system that's been out a while and has had a lot of controversy around it, we had a filer fail on us. We were able to get a replacement filer the following day. It was excellent.
For what we do, I can have up to close to 120,000 separate widgets running simultaneously and delivering data to other systems; and everything works, no problem. I am currently trying to find out where we’re moving ahead from here.
Technical support is excellent.
I was involved in building it. I found it a little bit grueling to get my certification to build it, but I really can't speak to the NetApp filer documentation. The documentation that we use for it is different from what NetApp uses.
I didn't evaluate anything. That is done in the organization at higher levels than I am. I know that NetApp won the contract again, so they must be doing something right because we’re not going to give a contract to anybody for a bad product. Right now, I'm concentrating on our collapse-down strategy in which we're taking multiple systems and putting them all on one system. That's why I'm here. I'm curious to see how it's going to impact the filer: whether the filer is going to need to expand; whether we're going to be migrating to a new filer; and so on.
From a relations perspective, it makes us look better that we have the best foundation to run things that we can. It also provides cost savings because it has efficiencies we can gain with it.
The performance is probably the most valuable feature. It allows us to meet our customer's needs, being able to provide that level of performance that they need for their workloads.
There's always going to be room for improvement. I don't really have anything sticking out that's a major pain point or something that it's not doing that I need it to do.
Anything that I might like to have seems to be happening already, whether it’s the price coming down, tracking performance, or higher capacities; that work is already getting done or it already has been done.
We're interested and excited about getting to 32-Gb Fibre Channel. With their new models, NetApp will be moving to 32-Gb Fibre Channel. That would potentially raise performance and/or lower our port counts, simplifying or minimizing the amount of cabling we need to put in place. It would be a nicety, to be able to clean things up and simplify. It's something I'm looking forward to.
It seems to be rock solid. We've not had any issues with it at all.
Since we've added the All Flash FAS, we have scaled up. We've added additional disk shelves; it seems to be growing just fine with us.
I don't think we've had to open up any cases, or needed any kind of tech support on it, other than working with our VAR setting it up.
I've contributed opinions regarding the decision to invest in the All Flash FAS.
We've been NetApp customers for quite a while, so we just kind of grew into it, from disk to flash cache, flash pool and then to all flash.
I was involved in the initial setup. It was very straightforward. Working with our partner, they tend to do a lot of the work on our behalf, but it's still a pretty straightforward process. There were really no gotchas.
Before choosing this product, I did not evaluate other options.
The solution is great; the company is fantastic to work with. I cannot think of a bad experience that we've had with either the company or the product itself. We've had issues, but nothing that wasn't overcome and worked through, and we're better in the long run for having worked through it with a good company like NetApp.
We're very pleased with it but then I guess we don't have a lot of experience with other things to maybe compare.
The most important criteria for me when selecting a vendor to work with is the support. Everybody's going to have issues with something, but being able to resolve or remediate any issues as quickly, seamlessly and as open as possible is very important to us.
The most valuable features are cost, performance, and usability. NetApp is really good with usability; it's quick to get up and running and easy to use.
We've been using it for our internal cloud environments, for internal cloud storage. Response time is very fast, capacity is very good, and performance is very good; it's quick.
We've only had it in production for about three months, so we don't have a lot of time with it. For what we're using it for, it's been fine. I don't know of any issues or anything that we need to do, that I would request additional features right now, aside from the scalability improvements I’ve mentioned.
I know we use external monitoring. There's some level of monitoring on the systems themselves, but we do use a lot of external monitoring, whether it's NetApp versus third party. I know with ONTAP 9, they're working on more monitoring capabilities and more features within the unit, but they don't have that yet. I would like to see more monitoring onboard, on the system, instead of having to throw another third-party system at it.
I've been a NetApp customer for quite a while, at least 12 to 13 years. Stability's never been an issue for any of our systems that I've been associated with; it's been very good. We haven't had any issues with those units, knock on wood, so far; it's been good.
Scalability has been OK. We've been scaling them vertically rather than horizontally, because you can only scale the FAS horizontally so far; so we've scaled up vertically.
I would like to see them improve the ability to scale vertically. With flash, you can only drive so many IOPS; the controllers can only handle so many IOPS. There's a limit; there's physics, a mathematical limit to what they can do.
It's been a long time since I've actually called technical support with a case. I try not to call tech support. At my level, I usually need something like a third-level support. You call in, you have to say what your issue is, they can't help you and then they have to pass it to the next person and then usually it's third level. Usually, it's a third-level, advanced person that I would need to speak to.
They've been fine. Once you get to that level, someone that's knowledgeable, support's fine.
In this environment, we were using spinning disks. When we needed to expand capacity, that's when we decided to go with all flash, and NetApp made it very price competitive. They were trying to push those units, so it was worthwhile to get flash instead of more spinning disks.
NetApp's initial setup is very straightforward. It's very easy to get up and running within a day, as long as you have the cabling in place and the power, but that's outside of NetApp's control. Once you have that infrastructure in place and they come on site, it's very easy to get up and running within a day.
Before choosing the All Flash FAS, I also considered Hitachi. We chose NetApp because NetApp is in our internal cloud, and that's what we were expanding. We didn't see the need to switch vendors at that point. NetApp's easier than Hitachi HNAS to get up and running.
For my manager, price is the most important criteria when selecting a vendor to work with. NetApp's been very competitive with pricing over the last 2-3 years.
NetApp's features are easier, and the capabilities are a lot more advanced than Hitachi and other vendors that we look at. The software's much more mature than the other vendors. That's why I like NetApp. It's easy to use. It's easy to get down to what you want to do with it; the features and capabilities are there.
Everybody pretty much can do the same. The issue is how complicated it is to get to what you're really trying to do. That's the one thing that I've seen. NetApp does a good job. They're much more mature, as I’ve mentioned. It's easy to drill down to get to the data, get it set up and get it configured, and it works.
We've only been using it three months. We haven't hit any issues with it yet; I can't say that we won't, but I'm not expecting to.