The primary use case is enterprise storage for our email database system.
We have just been using it on-premises. We are looking to move the workloads to the cloud, but right now it's all on-premises.
From an operations standpoint, we pretty much set it and forget it. We don't have to manage anything because of the AFF's speed and low latencies. A big requirement in the healthcare industry is low-latency response times, and for that it has been perfect.
With thin provisioning, we can overprovision our boxes, but there are still applications that are storage capacity hogs, so we still have to report on capacity.
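For illustration, here is a minimal sketch of the kind of capacity report this implies, using the ONTAP 9 REST API as I understand it. The host, credentials, and the 80 percent threshold are hypothetical placeholders, not details from the review.

```python
# Hypothetical sketch: flag thin-provisioned volumes that are becoming
# capacity hogs, via the ONTAP 9 REST API (endpoint and field names as I
# understand them; verify against your ONTAP release).
import requests

ONTAP_HOST = "cluster1.example.com"  # hypothetical cluster management LIF
AUTH = ("admin", "password")         # hypothetical credentials

resp = requests.get(
    f"https://{ONTAP_HOST}/api/storage/volumes",
    params={"fields": "name,svm.name,guarantee.type,space.size,space.used"},
    auth=AUTH,
    verify=False,  # lab shortcut; use real certificate verification in production
)
resp.raise_for_status()

for vol in resp.json()["records"]:
    # guarantee.type == "none" means the volume is thin provisioned
    if vol.get("guarantee", {}).get("type") != "none":
        continue
    size, used = vol["space"]["size"], vol["space"]["used"]
    pct = 100.0 * used / size if size else 0.0
    if pct >= 80:  # arbitrary reporting threshold
        print(f"{vol['svm']['name']}/{vol['name']}: {pct:.0f}% of {size} bytes used")
```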
It simplifies our IT operations and makes them more efficient.
The most valuable feature is that it's fast. We do not use the solution for artificial intelligence or machine learning applications, but our overall latency is low. With our SQL Server and Oracle servers, compared to the older filers, like 7-Mode or the 8000s in cluster mode, or even to performance on Pure flash systems, there is no comparison. We are seeing sub-millisecond latency, which is pretty nice.
The solution has enabled us to move large amounts of data from one data center to another (on-premises) without interruption to the business, using SnapMirror.
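As a rough illustration of that kind of SnapMirror move, here is a minimal sketch against the ONTAP 9 REST API; the cluster address and the SVM and volume names are hypothetical, and the reviewer's actual procedure is not documented here.

```python
# Hypothetical sketch: replicate a volume to another data center with
# SnapMirror via the ONTAP 9 REST API (endpoint per my understanding of
# the API; names are made up for illustration).
import requests

DEST_CLUSTER = "dr-cluster.example.com"  # hypothetical destination cluster
AUTH = ("admin", "password")             # hypothetical credentials

relationship = {
    "source": {"path": "svm_prod:mail_db"},       # source "SVM:volume"
    "destination": {"path": "svm_dr:mail_db_dr"}, # destination "SVM:volume"
    "create_destination": {"enabled": True},      # let ONTAP provision the destination
}
resp = requests.post(
    f"https://{DEST_CLUSTER}/api/snapmirror/relationships",
    json=relationship, auth=AUTH, verify=False,
)
resp.raise_for_status()
print("SnapMirror relationship requested:", resp.status_code)
```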
The solution has improved application response time. Compared to the 3250s and 8000s, it has been night and day.
We would like to see NVMe working with FabricPool, because FabricPool broke our backups. We enabled FabricPool to do the tiering from our AFFs to our Webscale, but it sort of broke our Cobalt backups. I think they're going to fix it in version 9.7.
SnapDrive is just another piece of software used to manage the storage on the filers. It could use some updates.
There are still a lot of things that we have to think about, like storage and attributes, before we can go ahead with it.
We haven't gone to their standard Snap product yet, but that's supposed to centralize everything. Right now, we have to manage the individual hosts that connect to the stores, which is sort of a pain.
We've been using NetApp for the last 15 years.
So far, the stability is good. It's great.
For the AFFs, I haven't had any problems with the scalability. We went from two to six nodes without a problem.
It helped us easily move about 10 petabytes of data from San Diego to Phoenix.
The technical support has been awesome. Whenever we have a problem, we just give NetApp's support a call, and they fix our issue.
With the newer versions, we have needed less support. The solution has just been working.
We didn't switch over. We have been using NetApp for 15 years.
This solution has reduced our data center costs. Moving off the 8000 and 3200 series took us from 20 racks of storage down to two.
The initial setup was straightforward. We've been deploying NetApps for the last 15 years. We are pretty familiar with the boxes.
I've been using the technology for years. For every model and version, the deployment is basically the same.
My team did the deployment.
We use a private cloud, which is Wesco, and it definitely saves us a lot of space.
The pricing is good.
We went through the whole vetting process of scoring different vendors when we built out a greenfield environment, and NetApp won.
Check out the AFF. It is super fast and reliable. We've been using it for a long time. It's the perfect system for us.
I would rate the solution as an eight out of 10 because there's always room for improvement. To make it a 10, it would have to have super submillisecond performance at a cheaper price. It is about latency in our environment. We want submillisecond for everything across the board. If something can guarantee that performance all the time without increasing costs, that would be cool.
We use it for our EHR. We have 4,000 users who need access to a very large EHR called Epic. We are sharing a Caché database through AIX servers.
It made everything faster. The user performance went from about eight seconds, for certain screens, down to three seconds per screen. That was the primary reason. Our users can multitask faster. The way Epic works is that you have multiple screens up at the same time. When you have multiple screens up at the same time and you have a patient sitting in front of you, speed is quality. Where before, the patient would have to wait for answers, now they get them almost instantaneously. Our users can run multiple things at the same time. For the users, the nurses and doctors, it is faster. All around faster.
As for IT's ability to support new business initiatives as a result of using this product, we are upgrading to Epic 2018 next year. The older system couldn't have supported it. That is another reason we went to a faster system. Epic has very high standards to make sure that, if you buy the upgrade, you will be able to support the upgrade. They advised me, top to bottom, make sure you can do it. Our new system passed everything. It's way faster.
We have VMs and we're running VDI with VMware Horizon View. We have about 900 VMs running on it, and about another 400 Hyper-V servers running on it. Our footprint is very tiny now versus before: we now have some 30 servers running 1,000 machines where we used to have 1,000 machines running 1,000 machines. We have Exchange, SQL, Oracle, and huge databases running off of it with no problem at all, including Epic. It's full, but it's very fast.
It takes us a minute or two to set up and provision enterprise applications using the product. We can spin up a VM in about 30 seconds and have SQL up and running, for the DBAs to go in and do their work, in about two minutes.
It would primarily be speed. That's why we got it. Storage is costly but it's very, very fast. Very efficient, very fast.
Zero downtime so far. We've had it for two years.
We have not had to scale it. We bought it at about 128 terabytes and, right now, we are probably at about 80 or 90. Because of the upgrade, next year we are going to grow 30 percent. We will probably upgrade in 2020 or increase the space.
Zero downtime, so we've never really called. The engineer who supports it will call for firmware upgrades or for a yellow light: "Why is it on?" For the most part, we haven't had any issues with it at all.
We were on a standard NetApp but we upgraded to the FAS because of performance. We had it in for a test and it succeeded. That's why we bought it.
I have been with the company for 20 years and we have had NetApp for 20 years. We did switch over to IBM, about ten years ago, right before we went to Epic. But Epic said, "No IBM. NetApp." We were switching from NetApp to IBM, because IBM had a little bit of advantage, a long time ago. Then Epic came in and said, "No, switch back." So, we're back.
We have clusters but our guy doesn't know how to do the cluster side of things. That's what the reseller did, primarily.
We used a reseller, IAS. They have helped us. Our experience with them is good. We have had them for 20 years.
The benefit of getting the product, versus not getting it, is that it has allowed the clinic to do more. Since they are doing more, the time to return on investment is shrinking. We bought it two years ago and it has probably already paid for itself.
The old NetApp we had was paid for. The new NetApp was about $3 million and we paid for that in about two years. It was well worth it because we can do more. For example, our advanced imaging is all pictures, videos; huge amounts of data get used up. Now they can triple and quadruple the amount they could do because of the speed. So instead of seeing ten patients a day, they're seeing 30 or 40 patients a day.
The total cost, the pricing of it, has gone up quite a bit.
Dell EMC. We looked at them briefly when they were EMC. We looked at IBM. But Epic pretty much says that NetApp sets the standard and we have to follow that.
If you have the money, you can't compare it to what we had at all, you just can't. In fact, the one that we had for production for the entire clinic is now sitting in our DR as cold storage. It went from state of the art to boat-anchor in about two years.
Ease of use: We're familiar with the NetApp platform and ONTAP, and we're comfortable with the tool sets it has. We've been trained on it as a group for quite some time. We started out with IBM-branded NetApp on 7-Mode and have grown from 7-Mode all the way into ONTAP 9.0. The cross-training among team members allows us to help each other with issues that we deal with on a regular basis. We find that there's a lot of value in that.
We use it as a storage location for Riverbed centralized storage. We use it for VMware, VMFS volumes, and our VMware platform. We also use it for iSCSI and for regular RDM server storage. We use it primarily for block-related storage.
We use it for multiple apps. It's enterprise-wide. We have eMARs. We have what they call the Obamacare Exchange running on it, and HBE for the State of Kentucky. We have a lot of VMware running on it, with thousands of servers whose VMDK files are nested in VMFS volumes running on the AFF8080.
One of the primary reasons that we went with the AFF was the dedupe and the compression, and the fact that they are not software-based but hardware-based. It's inline.
With the compression and dedupe, it's not necessarily one-to-one, gigabyte for gigabyte. The compression and dedupe allow you to buy a lot less while obtaining a lot more storage capacity, and you still get the performance of SSD; the dedupe and compression don't get in the way of the product's performance.
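To make the gigabyte-for-gigabyte point concrete, here is a small, hypothetical sketch that reads back the efficiency ratios ONTAP reports per aggregate. It assumes the ONTAP 9 REST API; the exact field names can vary by release, and the host and credentials are invented.

```python
# Hypothetical sketch: read back what inline dedupe and compression are
# saving, per aggregate, via the ONTAP 9 REST API. Field names follow my
# reading of the API docs and may differ across releases.
import requests

resp = requests.get(
    "https://cluster1.example.com/api/storage/aggregates",  # hypothetical host
    params={"fields": "name,space.efficiency.ratio,space.efficiency.savings"},
    auth=("admin", "password"),  # hypothetical credentials
    verify=False,
)
resp.raise_for_status()

for aggr in resp.json()["records"]:
    eff = aggr.get("space", {}).get("efficiency", {})
    # e.g. a ratio of 5.0 means roughly 80% less physical capacity consumed
    print(f"{aggr['name']}: efficiency ratio {eff.get('ratio')}, "
          f"savings {eff.get('savings')} bytes")
```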
I would like to see a little more integration of some of the core, fundamental components of OCI into OnCommand Unified Manager, instead of it being either all OnCommand Unified Manager or having to go to OCI for everything it does. With what we pay for a node-pair and OnCommand Unified Manager, there ought to be at least a third of that OCI integration in performance monitoring and alerting. There's a lot there, don't get me wrong; we've got all the alerting and everything. But there should be a little more of OCI bundled into OnCommand Unified Manager.
We own every license that NetApp has except SnapLock. In future versions, I would like to see SnapLock integrated into the platform, not offered as an additional license cost.
We had every license when we purchased our platform. We're a major player with NetApp when you consider our total platform; all the data that we manage comes to around 12.5 to 13 petabytes. Whether it's the AltaVault, StorageGRID, E-Series 2800, FAS8060, 8080, or the AFF A700, we have a substantial investment in all of their products at both the Commonwealth data center and the alternate data center. Yet when you consider that we own every license they have except SnapLock, that's the one we need the most right now, for our stakeholders' legal purposes.
Overall, it's pretty good. With the AutoSupports and the support SEs who are on staff when stuff goes bad and we have bad hard drives, we have found that it's a pretty stable platform.
Also, all storage platforms have issues; there are things that go wrong with all of them. There's no magic platform out there. But the NetApp support staff, engineering, the ticketing, and the people who help when you call in a ticket are very responsive, and that also has great value.
It's very scalable. Right now, at our primary site, we have four FAS8060 nodes and two node-pairs of 8080, and we're adding an additional node-pair of 8080 along with a node-pair of A700. At the alternate data site, we've got a node-pair of 8060 and a node-pair of 8080, and we're adding a node-pair of 8200. For the upgrade at the primary site, the only portion that would be considered risky is replacing the intercluster switch, which has to go through change control, because we're expanding beyond the capacity of the original switch we purchased. It's very scalable, and we like the product.
They're always very good. Whether I contact them online or whether I call in, they're very diligent in following up and making sure issues have been resolved before they close the ticket.
We have multiple platforms. We have EMC VNX7600s, and we just got rid of a VNX5600 and a 5400 that were not able to keep up with the compute we were driving through them. On one of those systems, the VNX5600, we had 250 terabytes of free space that couldn't be utilized because the processing power of the platform couldn't keep up with what we needed. It was over-utilized, so we went with NetApp because it has the ability to handle the load that we throw at it.
I was involved in the initial setup. It was somewhat complex because we did a cutover from 7-Mode: we stood up a brand-new platform, had to move the data from one to the other, and dealt with the outages involved, on top of how different the clustering is going from 7-Mode to ONTAP.
I also do a lot of the infrastructure, as far as the fabric management, the ports, the trunks, and the fiber connections from the NetApp platform or the NetApp cluster to the IBM-branded Brocade directors. I do all of it: the zoning and the fabric management. It's very detailed and very complex. You have to really know what you're doing in order to get that set up properly. That is not on NetApp; it's just the nature of the work. With any system, you would have that to deal with.
Every time we go through an upgrade process or we have a new purchase, we look at what functionality is offered by each vendor/manufacturer and we don't purchase based on fidelity to a single vendor. It has to be based on:
We just finished purchasing a new node-pair of 8080, an AFF A700, and an 8200. If Unity had come in at a comparable price, we could have gone with them. We didn't, simply because of the scalability of the NetApp product.
Look for these three major components when researching a similar product:
As far as AFF goes, we've had far better response and longevity from the actual drives themselves, because they don't wear out as fast as spindle drives do. I would say don't go with spindle; go with all-flash unless it's archive.
Most important criteria when selecting a vendor:
It has sped up everything through the all-flash storage. Everything is faster than it used to be. Everybody can access their VDIs fast and get to their servers fast. It is probably 5x or 10x faster, but I am not sure. It is just quick. Nobody is waiting for things anymore.
The switch to the work-from-home model because of COVID was the key challenge that our business wanted to address. Before COVID, it was all in the office, and then after COVID, everyone was working from home. We wanted to scale the VDIs easily and spin up VMs that people can use on a day-to-day basis.
All-flash storage has definitely delivered the most value to our organization. We have a large VDI deployment, and there is now no wait time when they are booting up. Everything is quick. Everything builds fast. I would give it a ten out of ten. It is easy to use. It just runs. I never have to touch it. Without it, we would probably not have been able to grow our VDI deployment. The compression and dedupe required would not have been possible without it.
I would like to improve the ransomware aspect. We get a lot of false positives, and there are no details of what is happening. This seems to be already fixed in the new version.
Previously, we have used Hitachi and other lesser-known solid-state storage vendors. I do not know the reason for choosing NetApp at the time because I was not there for that decision.
I am familiar with other solutions. The advantage of the NetApp solution is that it is easy to use. SnapMirror is very nice and easy to set up. It is easy: when we provision a volume, it is automatically replicated to our DR site. It is touch and go; we can do it with one click.
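That "provision once, replicate automatically" flow could be scripted roughly like this: a hypothetical sketch against the ONTAP 9 REST API, with all names, the size, and the DR pairing invented for illustration.

```python
# Hypothetical sketch: create a volume, then immediately protect it to a DR
# SVM with SnapMirror, via the ONTAP 9 REST API (names and size invented).
import requests

HOST = "prod-cluster.example.com"  # hypothetical
AUTH = ("admin", "password")       # hypothetical

# 1. Create the volume (size is in bytes in the REST API).
volume = {
    "name": "app_data",
    "svm": {"name": "svm_prod"},
    "aggregates": [{"name": "aggr1"}],
    "size": 500 * 1024**3,         # 500 GiB
    "nas": {"path": "/app_data"},  # junction path for NAS access
}
requests.post(f"https://{HOST}/api/storage/volumes",
              json=volume, auth=AUTH, verify=False).raise_for_status()

# 2. Replicate it to the DR site.
relationship = {
    "source": {"path": "svm_prod:app_data"},
    "destination": {"path": "svm_dr:app_data_dr"},
    "create_destination": {"enabled": True},
}
requests.post(f"https://{HOST}/api/snapmirror/relationships",
              json=relationship, auth=AUTH, verify=False).raise_for_status()
```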
We have been working with NetApp for the last five to eight years. We have not evaluated any other solution. We have been happy with them, so we stuck with them.
We definitely like the anti-ransomware capability. That is cool to have. I am excited to go to the new version, where we will also have fewer false positives. There is all-new reporting, which is cool, so I will have to look at how to do more in-depth reporting than what we do now.
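For readers curious what enabling that looks like, here is a hypothetical sketch that turns on autonomous ransomware protection for one volume in learning mode first, which is the usual way to cut down false positives. It assumes the anti_ransomware field of the ONTAP 9.10+ REST API as I understand it; the host and volume names are invented.

```python
# Hypothetical sketch: enable ONTAP autonomous ransomware protection on a
# volume in "dry_run" (learning) mode via the REST API (field per my
# understanding of ONTAP 9.10+; volume and host names invented).
import requests

HOST = "cluster1.example.com"  # hypothetical
AUTH = ("admin", "password")   # hypothetical

# Look up the volume's UUID by name.
vols = requests.get(f"https://{HOST}/api/storage/volumes",
                    params={"name": "vdi_profiles"},
                    auth=AUTH, verify=False).json()
uuid = vols["records"][0]["uuid"]

# Start in learning mode; switch the state to "enabled" once the baseline
# is built and false positives settle down.
requests.patch(f"https://{HOST}/api/storage/volumes/{uuid}",
               json={"anti_ransomware": {"state": "dry_run"}},
               auth=AUTH, verify=False).raise_for_status()
```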
Data is always growing. We will see where our usage goes. We have had the biggest impact by going to the all-flash storage. We just purchased a C800, so that would be around for the next couple of years. We do not have any goals for our next technology investments over the next couple of years.
We are not yet too big on AI. It would be exciting to try to use some of the AI features. I want to see how that works.
I would rate this solution an eight out of ten.
Its use cases include everything from high bandwidth to low latency, AI workloads based on NVMe drives, and all the way to our basic home directories and what I call common plop-and-drop drives for the teams.
The challenge that we were trying to address by implementing NetApp AFF was that we needed truly high-speed storage to feed the GPUs for AI/ML workloads. We also had the financial responsibility of being able to lower the QoS when we just needed basic storage rather than pure high-performance storage.
NetApp AFF has helped with faster data, and at the same time, we are able to work with our solutions team to set up FlexCache shares so that we can more easily set up data pipelines and data life cycles. We can also integrate with our corporate systems for replication.
NetApp AFF has helped to simplify our infrastructure while still getting very high performance for our business-critical applications. The flexibility to keep everything on superfast NVMe but also tweak the QoS has allowed us to centralize more of our storage services. We need less rack space. We are using Keystone for financial responsibility. We have centralized and standardized a lot of our ITOps.
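As a sketch of the QoS tweaking described here, the following creates a throughput ceiling and attaches it to a volume, using the ONTAP 9 REST API as I understand it; the policy name, SVM, volume, and IOPS limit are all hypothetical.

```python
# Hypothetical sketch: cap a volume that only needs basic storage with a
# fixed QoS policy, via the ONTAP 9 REST API (names and limits invented).
import requests

HOST = "cluster1.example.com"  # hypothetical
AUTH = ("admin", "password")   # hypothetical

# 1. Create a QoS policy with an IOPS ceiling.
policy = {
    "name": "basic-tier",
    "svm": {"name": "svm_ai"},
    "fixed": {"max_throughput_iops": 5000},
}
requests.post(f"https://{HOST}/api/storage/qos/policies",
              json=policy, auth=AUTH, verify=False).raise_for_status()

# 2. Attach the policy to an existing volume, looked up by name.
vols = requests.get(f"https://{HOST}/api/storage/volumes",
                    params={"name": "scratch_vol"},
                    auth=AUTH, verify=False).json()
uuid = vols["records"][0]["uuid"]
requests.patch(f"https://{HOST}/api/storage/volumes/{uuid}",
               json={"qos": {"policy": {"name": "basic-tier"}}},
               auth=AUTH, verify=False).raise_for_status()
```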
NetApp AFF has not helped to reduce support issues, such as performance-tuning and troubleshooting, because we have not had any issues yet that we had to take a look at, and I hope we do not.
NetApp AFF has definitely helped to reduce our operational latency. With the speeds of the drives, the network links, and the network topology that we are able to put together, and not just for huge, dense workloads, we are able to scale out horizontally so everyone can get the same speed.
NetApp AFF has not saved us much cost, but the Keystone model that we are able to run AFF in partnership with has helped to save costs. Instead of making those huge capital purchases where we may purchase 500 terabytes and not use it, the consumption-based model has allowed us to be flexible. It gives us that financial flexibility to say, "We want to experiment with this more. Add it on." We can also say, "We do not need it. Take it back, and give us that plug-and-play option."
The ease of use for setting up our basic shares such as NFS and CIFS is valuable. It takes a couple of clicks to set up things like object shares.
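The "couple of clicks" for an NFS share maps to a couple of API calls. Here is a hypothetical sketch that creates an export policy with one rule; it assumes the ONTAP 9 REST API, and the SVM name and client subnet are invented.

```python
# Hypothetical sketch: create an NFS export policy and rule via the ONTAP 9
# REST API (endpoint and body per my reading of the API; names invented).
import requests

HOST = "cluster1.example.com"  # hypothetical
AUTH = ("admin", "password")   # hypothetical

policy = {
    "name": "lab-clients",
    "svm": {"name": "svm_lab"},
    "rules": [{
        "clients": [{"match": "10.0.0.0/24"}],  # subnet allowed to mount
        "protocols": ["nfs"],
        "ro_rule": ["sys"],  # read-only access with AUTH_SYS
        "rw_rule": ["sys"],  # read-write access with AUTH_SYS
    }],
}
resp = requests.post(f"https://{HOST}/api/protocols/nfs/export-policies",
                     json=policy, auth=AUTH, verify=False)
resp.raise_for_status()
```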
The ONTAP S3 implementation is not feature-complete as compared to StorageGRID. We had to move our lakeFS instance from ONTAP S3 based on AFF to StorageGRID.
The lab that I am developing has used NetApp AFF and NetApp storage for about two years, but I know that our organization, in general, has been using NetApp for storage for a long time.
I have not thought about it. It must be good because I have not had to think about it.
Scaling is straightforward. Based on your current needs and your inter-cluster switches, you add more storage, create new aggregates and SVMs, and you are good to go.
The support is great. We have a dedicated team. I can work with our dedicated embedded professional services group. If it is a larger issue, I can send a message to our support ops engineer and get an answer right away, or even proactively.
This lab is brand new; we started with NetApp AFF.
I am a nerd at heart, so I worked with our professional services group to do the rack and stack. It was pretty straightforward. It was based on the idea of centralized controllers with expansion disk shelves. We were able to work with our professional services consultant to get it set up in two days or so.
We were able to have those huge savings as our lab was being stood up, and now, as our usage increases, our cost increases, and as our usage decreases, our cost decreases. We have been able to see that trend match up with how we are using it.
We did not evaluate other options because it is part of a centralized storage offering with our company. We wanted to keep everything on the same level for ease of use for purchasing, operations, shared ownership, and everything else.
In terms of using other NetApp solutions or services, we use less of NetApp Cloud Services, but we do use Cloud Volumes ONTAP. We also use SnapMirror and FlexCache for a lot of the intra- and inter-site capabilities.
I would rate NetApp AFF a 10 out of 10.
We primarily use the solution for databases, including Oracle, SQL, PostgreSQL, and VMware.
We're moving some data warehouses over as well as our main financial system.
The NVMe flash cache is the most useful feature. It lowers transactional latency even more.
We have found the ease of use to be excellent. Everybody's got expertise in it.
AFF helped reduce our operational latency. Since we started using it, we've improved by 20%.
AFF has helped us optimize our costs. We paired it with StorageGRID: we use AFF for the hot data and then tier it off to StorageGRID, which is really helping with that.
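As an illustration of that hot/cold split, here is a hypothetical sketch that sets a volume's FabricPool tiering policy so cold blocks move off to the object tier (such as StorageGRID, once attached as a cloud tier). It assumes the ONTAP 9 REST API; the host and volume names are invented.

```python
# Hypothetical sketch: keep hot data on the AFF and let cold blocks tier to
# an attached object store by setting the volume tiering policy, via the
# ONTAP 9 REST API (names invented).
import requests

HOST = "cluster1.example.com"  # hypothetical
AUTH = ("admin", "password")   # hypothetical

vols = requests.get(f"https://{HOST}/api/storage/volumes",
                    params={"name": "warehouse_vol"},
                    auth=AUTH, verify=False).json()
uuid = vols["records"][0]["uuid"]

# "auto" tiers cold data after a cooling-off period; "snapshot-only" tiers
# only Snapshot blocks; "none" pins everything to the performance tier.
requests.patch(f"https://{HOST}/api/storage/volumes/{uuid}",
               json={"tiering": {"policy": "auto"}},
               auth=AUTH, verify=False).raise_for_status()
```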
I do not have any notes for areas of improvement.
There's a lag with StorageGRID because it's on tier-three disk. After a few days, we sluff data off to StorageGRID, and then if, all of a sudden, someone needs that data restored, it takes a while to spin it back up and write it back. What would be great is if they could make StorageGRID fast, with fast recall. That said, that's a recovery issue.
In the past, NetApp designed it so that you had a 70% threshold: you would never fill up past 70%, since you needed to keep that room available. Whereas with Pure, I can fill it up to 110% of what they listed and it still goes at full speed. NetApp can't do that. They need to build in more capacity so that users don't lose a 30% buffer off the top.
I've been using the solution for six years.
The stability is fantastic. They're really coming as close to a high availability system as you can get.
In the past, with the controller failover, you'd have to rely on the other controller. It was a little bit hit or miss. AFF has really stepped it up to where I'm not lagging on performance when it fails over if it's an upgrade, update, or something like that. I don't have to worry as much about controller failure anymore.
Scalability is great; it's just expensive. That's why we would go with StorageGRID. Due to supply chain issues, these flash drives are so expensive that we're paying through the roof for them, even with a discount. Therefore, while the scalability is great, we can't really afford it. I can't go and buy a $4 million system.
Technical support is pretty good. It can be hit or miss, but for the most part, it's good.
The main complaint I get from the engineers is that NetApp will say a fix is coming in a future release, and that future release is just too far down the road; we need it done right away. We see a pain point now, and we would like them to fix our problems right now. That said, we understand we're not the biggest customer on planet Earth.
Before AFF, we used Hitachi. We switched to simplify, moving from Fibre Channel over to NAS. We were looking to simplify and make the network the cost point instead of needing both Fibre Channel expertise and network expertise.
I was not involved in the initial setup of the solution.
We've probably optimized our costs by 70%.
We have seen ROI in terms of less latency on applications and users being able to get more done more quickly. The experience is really good with StorageGRID unless you're doing restores, and then they've got to restore that data. That's the only thing that's lagging. That said, the return on investment has been great since the DBAs and the other customers get more done and get more cycles accomplished with that enhanced IOP performance.
The pricing is palatable; we can swallow it. We're a longtime customer and we view our relationship as a partnership, not just a one-time deal. They have taken good care of us.
We looked at Dell, Pure, and EMC, among other options.
I like Pure. Pure has very low-cost copies of point-in-time databases that they can spin up immediately, and the developers and database administrators can have those hanging off the same disk at a low cost. It's just built off of the existing data, and I haven't seen NetApp come up with anything like that yet.
The Snapshot, SnapMirror, and SnapVault technologies, and just having all of those technologies together, are really nice. We can get a copy, a SnapMirror, for example, in the data center and have it spun up really quickly. That's NetApp's technology, and that's the advantage there.
I have not used BlueXP, their cloud management offering.
We haven't seen any ransomware attacks. Security's pretty closed off. They're not going to tell us if something happens, so it's hard to gain visibility. We'll just know that we've got to do a restore or something. That said, we haven't lost anything.
We do not use any other NetApp cloud services. We just use StorageGRID and the AFF right now. FSx looks intriguing; we'd be willing to test it in the future.
I'd rate the solution nine out of ten. It's a good product.
We use it for NFS and CIFS unstructured data. We have about a couple of petabytes of all-flash.
Some of our volumes had response times of 30 to 40 milliseconds. When we moved to all-flash, our response times were reduced to microseconds, which was a tremendous improvement. In terms of the dedupe and compression squeezing the physical size, we are now seeing an 80 percent reduction (roughly a 5:1 efficiency ratio), which is very positive.
The solution has positively affected IT's ability to support new business initiatives.
It has improved performance for our enterprise applications, data analytics, and VMs. These improvements are a result of all-flash, throughput, reliability, compression, etc.
One of the features that I am looking for, which is already in the works, is to be able to take my cold data and automatically move it to the cloud. I believe this is coming out in version 9.4.
We have been running it for two to three years. It hasn't gone down yet. It can't get any more reliable than that.
Thanks to dedupe, our physical footprint has shrunk quite a lot. All the scaling that we have done so far has been within our organization; we haven't expanded it physically yet.
Since the product hasn't gone down in three years, there hasn't been a need to contact technical support.
The initial setup was straightforward. Nothing to it. The professional services from NetApp came in to help us out, and they knew their stuff.
We used NetApp for the deployment and our own resources. The experience was very positive.
The vendors on our shortlist were Oracle, Dell EMC, and Hitachi.
We chose NetApp because we were already using it, which made things simple, and because of its pricing. Also, some of NetApp's features are dominant in the market versus its competitors.
With all-flash, you can never go wrong. I am in the process of converting everything to all-flash.
We are not currently connected to the public clouds. We are looking to connect to them in 2019.
It takes us days to set up and provision enterprise applications using this solution.
We chose this solution because vendors are choosing all-flash over hybrid.
NetApp has sped up delivery for our virtual environment. It's been a great solution. Our customers have been amazed and overjoyed with the performance of our virtual environment, which uses VDI for virtual desktops and VMware for our virtual host servers. Everything is working seamlessly.
We plan to expand our use of NetApp solutions, and we're already in talks about this. We will be looking at the new ASA products they just released to see if they're more suitable for us than the C-series and what the cost is. We would love to have flash storage, but we'll see how much that costs.
A flash environment would be great. We still have some spinning-disk storage, and we hope to get rid of it completely and go all SSD, so we're looking for that to be within our budget.
The NetApp C-Series storage solutions are the most valuable to us. I would rate them 8.5 out of 10.
Customer support is a hot-button issue, so we definitely need better customer support. We get some support from our vendor, which helps. If the C-Series had a more user-friendly GUI, that would help us get our LUNs built and our data storage connected faster.
We've been with NetApp for years. We considered switching to HPE Nimble Storage, but we decided to stay with NetApp and upgrade our environment to the C-Series. With NetApp, support has been a pain point, but the price and warranty costs are perfect for our budget.
NetApp is offered at a reasonable price point, so it has saved us money. Our primary goal is to keep our costs low. We're hoping that the next line of products will include a more budget-friendly option.
I rate NetApp solutions 8.5 out of 10. Improving support would increase the rating. They could also enhance and streamline the GUI to make management more efficient.