I use the solution in my company for large files. The amount of files my company deals with on Dell PowerScale (Isilon) is humongous.
Group manager at a government with 5,001-10,000 employees
Offers reliability along with an easy deployment phase
Pros and Cons
- "The tool's most valuable features are high density and scalability."
- "The tool's support team and the way the cases are handled by the product's technical support team are both areas of concern where improvements are required."
What is our primary use case?
How has it helped my organization?
My company has only four people working on the Dell PowerScale (Isilon) platform, which has around 300 TB. With the number of people necessary to run the tool, I am not sure if it can be considered an effective product.
What is most valuable?
The most valuable features are high density and scalability. The high density and scalability features are important for my company since we are trying to have low electricity and power consumption, considering that the amount of data that we have to use is limited.
My company uses an on-premises deployment model for Dell PowerScale (Isilon).
No one is required to maintain the product, but since it is a huge platform in our company's environment, some components need to be replaced constantly. These replacements are not because the product or hardware is bad; they are simply what happens in such a large environment.
If I consider my assessment of Dell PowerScale's ability to interface with AI models and algorithms, I would say that my company has just started using AI after connecting it to the tool. I use the AI part a lot currently. So far, my company has only been using NFS to connect to our computers.
It can be difficult to assess the ability of Dell PowerScale to help our company manage and run its storage from any location since it is a huge platform. You can put the product anywhere, but I would say that it needs to be in big data centers. You can have one location or ten locations, but all of them need to be big locations. You can't put the product into a small office area. It is easy to manage the product from multiple locations. Generally, one can manage all locations using the product from a single point.
Dell PowerScale (Isilon) has helped a little to reduce or eliminate data silos, but all the silos we use in our company are huge, making it a hard process. My company needs to deal with silos anyway.
In terms of the flexibility of Dell PowerScale (Isilon) for supporting various data workloads while keeping them protected, everything is good: it can scale up very fast, with NVMe disks for performance and regular disks for archive data. For protection, we use snapshots along with mirroring.
What needs improvement?
The tool's support team and the way the cases are handled by the product's technical support team are both areas of concern where improvements are required.
When replacing hardware parts of the tool, there are about 15 steps to follow before Dell sends the parts over to us, which involves a lot of unnecessary work for our employees. Dell should streamline this process down to one or two steps instead of making users go through 15.
The support offered after installation could be a little better. After the installation phase, the tool's support services have shortcomings because our company is not allowed to send the automated error reports back to Dell. That process has to be handled manually by Dell and us, which is why it takes so long, even though we are eligible for help from a dedicated support engineer. Processing logs and other material takes time.
Buyer's Guide
Dell PowerScale (Isilon)
November 2024
Learn what your peers think about Dell PowerScale (Isilon). Get advice and tips from experienced pros sharing their opinions. Updated: November 2024.
816,636 professionals have used our research since 2012.
For how long have I used the solution?
I have been using Dell PowerScale (Isilon) for seven years. My company is a customer of Dell.
What do I think about the stability of the solution?
The stability and reliability of the product are good. My company has faced only two major errors that stopped production over a span of seven years.
What do I think about the scalability of the solution?
How are customer service and support?
During the installation phase, Dell's support services are top-notch and can be rated ten out of ten. Overall, I rate the technical support a six or seven out of ten.
How would you rate customer service and support?
Positive
How was the initial setup?
My experience with the product's deployment phase has been good.
The installation took two days for three racks, going from zero to one hundred. It is a simple step-by-step process that includes the physical installation and configuration. Our company is able to be up and running within two or three days.
Compared to other vendors in the market, the setup phase of Dell PowerScale (Isilon) is fast. Other products are difficult to handle when it comes to setup; it is technologies like SAN and related technologies that make the setup process harder to deal with.
What about the implementation team?
My company opts for a 50-50 approach, meaning we take help from Dell's experts and from our own employees.
What was our ROI?
Considering my position at the company, I would say that we do not track ROI for anything associated with Dell PowerScale (Isilon).
What's my experience with pricing, setup cost, and licensing?
It is easy for my company to deal with the product's costs and licensing, and it is getting easier as Dell simplifies its licensing process and licensing packages.
Which other solutions did I evaluate?
My company uses products from multiple vendors all the time. My company is required to deal with multiple vendors, so sometimes we use Dell and others.
If I compare Dell PowerScale (Isilon) to other vendors in terms of pros and cons, its main pro is high scalability: it is possible to build it out in a very large manner. However, that has been true for ten years, and other vendors are now coming out with good scalability features as well. I will have a one-on-one session with Dell's experts later this week.
What other advice do I have?
I can't speak to the future of our company's containerized solutions in terms of cloud integration since it is not applicable; we don't use the cloud, though we do use containers.
Based on key factors and the decision-making process, my company only uses the on-premises environment for our containerized applications.
I rate the solution a nine to ten out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Jun 4, 2024
Senior Technical Consultant at Amplus
Addresses the customer's need for a global rather than discrete file system
What is our primary use case?
We use the solution to organize the data structure. Some of its applications are geared towards companies in the oil and gas sector. For instance, it supports SIP solutions that conduct scanning and comprehensive seismographic analysis. Other customers include broadcast companies with vast historical assets that aim to manage their content libraries efficiently. It primarily focuses on data management and storage solutions.
How has it helped my organization?
PowerScale addresses the customer's need for a global rather than discrete file system. It resolves performance issues and offers comprehensive support. PowerScale needs more expansion regarding solutions such as HSM or integration with tape libraries.
What is most valuable?
Dell has pairing and utilizes optical services within the same infrastructure. This means utilizing services from the same infrastructure for internal file system needs and providing access to the public.
What needs improvement?
The solution should improve its pricing and features.
For how long have I used the solution?
I have been using Dell PowerScale (Isilon) as a consultant and reseller for seven years.
What do I think about the stability of the solution?
The product is stable.
What do I think about the scalability of the solution?
The solution is scalable and is suitable for enterprise customers.
How are customer service and support?
Support is very good.
How would you rate customer service and support?
Positive
What's my experience with pricing, setup cost, and licensing?
IBM is cheaper than Dell PowerScale.
What other advice do I have?
The maintenance depends on the time you are willing to invest in learning about the platform. It varies for each individual, and if you have people eager to learn, it can make a significant difference.
IBM builds its own disks and disk management, which controls costs; they don't rely on purchasing from vendors. Dell PowerScale, for example, does not manufacture its disks; it sources them from suppliers and procures them rather than producing them itself.
IBM can utilize gateways that offer a similar file system to PowerScale. These gateways provide both block storage and file services. This is different from PowerScale because when purchasing PowerScale, you acquire building blocks including CPU and memory. This configuration lacks the flexibility to adapt to various infrastructures. While this setup can be configured, it may pose limitations.
You can customize security settings within the tool, including access and file-level permissions. This focuses on enabling 'write once' capabilities, making it challenging to alter data without appropriate authorization. It would be impossible to tamper with unless an individual gains access by obtaining administrator credentials.
Overall, I rate the solution an eight out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
Last updated: Mar 31, 2024
System Team Leader at Deakin University
As you add more nodes in a cluster, you get more effective utilisation
Pros and Cons
- "The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time."
- "The replication could lend itself to some improvement around encryption in transit and managing the resyncing of large volumes of data. The process of failover and failback can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, the process of setting up the replicates and failing them back is just very tedious and could definitely do with some improvement."
What is our primary use case?
- Research data
- Departmental file shares
- Data centre storage: NFS
We have two data centres in our university. We have Cisco UCS, Pure Storage, and are heavily virtualised with VMware. PowerScale is our unstructured data storage platform. It provides scaled-out storage and our high-level NFS across applications. It also provides all the storage for our researchers and business areas, as well as students, on the network.
With the exception of block workloads, which are primarily VMware, Oracle Databases, etc., everything else is on PowerScale. It has definitely allowed us to consolidate and simplify management.
How has it helped my organization?
With quotas, we have a few large pools of storage in the data centres; we typically only have one or two Isilon clusters. That gives us the ability to multi-tenant, allocating data to different applications and isolating workloads. It is very efficient for managing that volume of storage. We are not tuning it every day or week. The only time we really do anything with it is when we are planning an upgrade of some sort, several times a year. Outside of that, it just does what we want it to do.
We automate the vast majority of what we do on the Isilon clusters: provisioning of storage, allocation of storage, management of quotas for tens of thousands of students, and managing permissions. That is the level of support they have for their built-in APIs, which is probably a huge game changer for us in the way that we manage the storage. It makes it far more efficient inside of PowerScale.
Compared to doing it manually, what we have been able to automate using the API is saving us at least tens of hours a month versus when we used to get service requests. We have even been able to delegate out to different areas. If we have an area with whom we do file shares, we delegate out the ability for them to create new shares and manage their permissions themselves.
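As a hedged sketch of the kind of provisioning call such automation makes (the endpoint path, field names, and share path below are illustrative assumptions based on the OneFS Platform API's quota interface, not taken from the reviewer's scripts; consult your cluster's API reference before use):

```python
import json

# Hypothetical cluster address; OneFS exposes its Platform API over HTTPS.
PAPI_BASE = "https://cluster.example.edu:8080/platform"

def build_quota_request(share_path, hard_limit_gib):
    """Build the URL and JSON body for creating a directory hard quota."""
    url = PAPI_BASE + "/quota/quotas"
    body = {
        "path": share_path,              # directory the quota applies to
        "type": "directory",             # per-directory rather than per-user
        "enforced": True,                # hard enforcement, not advisory
        "thresholds": {"hard": hard_limit_gib * 1024**3},  # limit in bytes
    }
    return url, json.dumps(body)

url, body = build_quota_request("/ifs/shares/physics", 500)
```

Wrapping calls like this in a self-service portal is what lets delegated areas create shares and manage quotas without raising service requests.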
The solution allows us to manage storage without managing RAID groups or migrating volumes between controllers. We saw this in the big refresh we did earlier in the year. You click the "Join" button on the new node, then go to the old node, click remove, and wait for it to finish. You don't have to configure anything when you add new node types; they are automatically configured. You can tune them and override things if you want, but there is no configuration required.
PowerScale has enabled us to maximise the business value of our data and gain new insights from it. It gives us the ability to have our data stored and presented via whatever protocol is required. Now, we can look at all these different protocols without having to move or duplicate the data.
The solution allows you to focus on data management, rather than storage management, so you can get the most out of your data. We looked at the types of data that we have on the cluster, then we just target it based on the requirements. We don't have to worry about building up different capabilities, arrays, RAID types, etc. We just have the nodes, and through simple policy, can manage it as data rather than managing it as different RAID pools and capacity levels. If someone needs some data storage, then we ask what their requirements are and we just target based on that. Therefore, we manage it as a workload rather than a disk type.
What is most valuable?
Their SmartQuotas feature is probably the thing that we use most heavily and consistently. Because it is a scaled-out NAS product, you end up with clusters of multiple petabytes. This allows you to have quotas for people and present smaller chunks of storage to different users and applications, managing oversubscription very easily.
We use the policy-based file placement, so we have multiple pools of storage. We use the cold space file placement to place, e.g., less-frequently accessed or replicated data onto archive nodes and more high-performance research data onto our high-performance nodes. It is very easy to use and very straightforward.
The node pools give us the ability to non-disruptively replace the whole cluster. With our most recent Gen6 upgrade, we moved from the Gen5 nodes to the Gen6 nodes. In January this year, we ended up doing a full replacement of every component in the system. That included storage nodes, switching, etc., which we were able to replace non-disruptively and without any outages to our end users or applications.
We use the InsightIQ product, which they are now deprecating and moving into CloudIQ. InsightIQ has been very good. You can break down the cluster's performance right down to protocol latency by workstation. When we infrequently do have issues, we use it to track them down. It also has very good file system reporting.
For maximising storage utilisation, it is very good. As you add more nodes in a cluster, you typically get more effective utilisation. It is incredibly flexible in that you can select different protection levels for different files, not necessarily for file systems or blocks of storage, but actually on a per file basis. Occasionally, if we have some data that is not important, we might need to use a lower protection. For other data that is important, we can increase that. However, we have been very happy with the utilisation.
Dell EMC keeps adding more features to the solution's OneFS operating system. We have used it for about 13 years, and the core feature set has largely stayed the same over that time while being greatly improved. It has always been solid NFS and SMB storage, and they have broadened their scope to NFSv4, SMB3 Multichannel, etc. They are always bringing in newer protocols, such as S3. Typically, those new features, such as S3, don't require new licensing. They are just included, which is nice.
Over the years, the improvements to existing protocols have been important to us. When we first started using it, they were running open-source Samba for their SMB implementation under the covers, and they used the built-in NFS server from FreeBSD. The new implementations introduced in OneFS 7 brought huge increases in performance and have been very good, though not necessarily any new features. We even use HDFS on the Isilons at the moment. The continued improvement has been really beneficial.
It is incredibly easy to use the solution for deploying and managing storage at the petabyte scale. With CIFS servers and IBM Spectrum Scale, you just don't get that horizontal scale as easily. I couldn't think of an easier way to deploy petabyte NAS storage than using Dell EMC PowerScale.
What needs improvement?
The replication could lend itself to some improvement around encryption in transit and managing the resyncing of large volumes of data. The process of failover and failback can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, the process of setting up the replicates and failing them back is just very tedious and could definitely do with some improvement.
There is a lack of object support, which they have only just rectified.
For how long have I used the solution?
About seven years.
What do I think about the stability of the solution?
The stability has been exceptional. I've been very happy with the stability of it. In the last six years, we have pretty much been disruption free. Prior to that, we have had one or two issues, which we worked with their support to fix.
We had a major refresh at the start of the year when we replaced one petabyte at one site and half a petabyte at another site. This completely replaced everything and took about a month. It was finished with one staff member overseeing the process and moving the data, roping in one or two other staff at different times to help with the physical racking.
They are quite heavy, so you always want to have two or three people involved. It has very minimal staff management required. For example, once the hardware is racked, it needs just one operator who joins the nodes, waiting for the data to move over. Internally, this is non-disruptive to the user.
Retiring the old nodes is more of a management task.
What do I think about the scalability of the solution?
Pretty much everyone touches the solution in some way or another. It has been a bit different right now with COVID-19, since a lot of people have been recently working remotely. In any given day, probably 12,000 people have been using it. That is just going by the number of active connections that we have from staff, students, and researchers at any time.
We can't see anyway that we would ever reach the limits of the product in terms of scalability and our workloads. We have no concerns around scalability.
It has a back-end network, and sourcing switches with enough ports to plug the nodes in, if you want to go big, is the most complicated part, not the actual management of storage. As you add more nodes, the management overhead remains largely the same.
For larger scalability, I would be very comfortable with it. We would just have to do some good site planning to ensure that we have enough room for it.
Our usage is pretty extensive. It touches on almost every area of our organization. With the introduction of object storage and support for Red Hat OpenShift coming in OneFS 9.0, we are very keen to explore and extend usage in those areas. That is part of the reason why we are upgrading our test cluster to OneFS 9.0: specifically to evaluate use with Red Hat OpenShift and Kubernetes. It definitely has a very strong place in the data centre now, and we don't see it going away anytime soon as more workloads go onto it.
How are customer service and technical support?
The support has been mixed. If you get through to the right engineers, you can get problems resolved incredibly quickly. If you don't, you can go around in circles for a long time. We do typically have to escalate support tickets through account managers to get them positioned correctly. However, once that happens, issues are resolved pretty quickly and we're generally happy.
The technical support is average. They are certainly not the best we have ever dealt with, but far from the worst. I would not recommend the product based on their tech support alone.
Which solution did I use previously and why did I switch?
Going back 13 years prior, we used to have a lot of Microsoft and Linux-based file servers all over the place. They were all siloed with a lot of wasted capacity. Consolidating all those down into a small handful of Isilon clusters has dramatically reduced the amount of silos that we have in the organization. In terms of reducing waste from having storage stuck in one silo or isolated area, it has made a huge improvement.
We previously used IBM Spectrum, which I don't think you can even buy anymore. Briefly, eight years ago, we moved a large portion of the workload off Isilon onto Spectrum. That was the biggest regret of my career; we couldn't get back on the Isilon fast enough. It was a commercial decision to move away from Isilon, which wasn't the cheapest, but Isilon was far more mature than the IBM product. Spectrum cost us so much that what we saved in capital expenditure we then lost in productivity, overhead, and maintenance. It was just a disaster. The support that we received from IBM was the worst I have ever received. I've been in this industry for about 17 years now, and I have never had a worse support experience than I had with IBM. It was a nightmare.
When we needed to get the issue with Spectrum fixed, there was no doubt about getting PowerScale. We couldn't get back on PowerScale fast enough. We just made that happen, and as soon as we did, all the fires were put out.
About 13 years ago, we were using six-terabyte nodes; now, they're obviously a lot bigger than that. While scalability was definitely a key interest, the main driver for us was the ease of management: consolidating all the separate file servers, with their own operating systems and RAID arrays, into one pool of storage where we could allocate quotas, still manage capacity effectively, centralize it, and reduce waste. The ability to scale out was just icing on the cake, and something we've utilised quite heavily over time, but the ease of management was the main driver.
How was the initial setup?
The initial setup has always been straightforward. The process of creating a new cluster is largely the same now as it was 13 years ago. You get your first node, then connect the serial port to it. You answer about 10 questions, then you're ready to go. The rest of the nodes are added by clicking a button. It's incredibly easy to set up, and it says a lot that the process has been the same for about 13 years. There's not really much to improve or simplify, because it is already incredibly simple.
Assuming the hardware was racked, you could have the cluster setup and your minimum three nodes joined within half an hour to 45 minutes.
The process of adding a node is very straightforward: It is pressing a button. This can take five minutes, then the process is complete. Once you have added new nodes, you can then remove old nodes.
Understand your workload. Make sure you size and cost it correctly for the amount of metadata you expect to see on it. Don't undersize your SSD.
For the whole replacement this year, I got one of our junior staff members, who had never actually used our PowerScale, to do the whole upgrade process. I just pointed him in the right direction. Because it was very easy, he managed to do it without any issues.
What about the implementation team?
We don't use any professional services. We always do it in-house.
Two people are needed for racking hardware. Only one person is needed to deploy it, as that process is very straightforward.
What was our ROI?
The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time.
PowerScale has reduced the number of admins that we need. It has allowed our admins to focus on adding value through automating tasks and streamlining operations for our customers, rather than focusing on the day-to-day and tuning RAID profiles. We can use our APIs to automate workflows for customers and have quicker turnaround times.
What's my experience with pricing, setup cost, and licensing?
The solution is expensive; it is not the cheapest solution out there. If you look at it from a total cost of ownership perspective, then it is a very compelling solution. However, if you're looking at just dollar per terabyte and not looking at the big picture, then you could be distracted by the price. It is not an amazing price, but it's pretty good. It is also very good when you consider the total cost of ownership and ease of management.
We added on a deduplication license. That is the only thing that we have added. That was a decision where it was cheaper for us to license the deduplication than it was to buy more storage, so we went with that approach. We just did an analysis and found this was the case.
We haven't really hit a workload or situation that we have had any issues catering for. With the huge number of different node types now, we could position any sort of performance, from very cheap, deep archive through to high-performance, random workloads. I feel we could respond very quickly to any business requirement that came up, assuming there was budget. Even without extra budget, the way our clusters are configured, we typically mix in high and low performance. We won't buy top-of-the-line high performance, but we will buy basic H500 nodes, which have a large number of slower spinning disks. That is what we standardize on for our high-performance tier.
Which other solutions did I evaluate?
13 years ago, it was called Isilon Systems. They were a startup in Seattle, while we are in Australia, and we were importing the hardware directly. At that time, there was nothing else we were really looking at; we were caught up in revolutionising the way we would manage one pool of storage. Then, six to eight years ago, when we had that little stint on IBM Spectrum, we didn't go to market. We heavily evaluated the IBM product and NetApp in cluster mode as alternatives. We ruled out NetApp as far too difficult to manage. The Spectrum product, on paper and from our evaluation of loaned hardware, seemed like it would be on par with Isilon. Little did we know the nightmare that would ensue.
The biggest lesson we learned came from moving away from it onto the IBM product: the maturity of a product is directly correlated with how little time you spend managing it. PowerScale is a very mature product; we have been using it for 13 years, and the core has a very solid, mature foundation built over that time.
We have dealt with Nimble Storage in the past. I would recommend Nimble Storage based on their support (at that time), as they had exceptional support. However, Dell EMC support is no worse than Cisco or any of the other vendors that we have had to deal with, but it is nothing special.
What other advice do I have?
Just don't underestimate how important a mature product is compared to something leading edge or new.
PowerScale is positioned primarily within the data centre. We have PowerScale heavily centralized, both in our IT department and on our campuses. We don't really have any PowerScale storage in the cloud or at the edge because we have very good network connectivity. In terms of the right tiers of storage, the flexibility we have for adding different types of storage with different characteristics to our existing cluster is now the best it's ever been in the 13 years we've managed it.
Between CloudIQ and DataIQ, they're replacing their legacy InsightIQ product. We haven't moved to CloudIQ yet to start looking at it.
Early on (we have been using the solution for 13 years), if you added a new node type, you had to add three physical nodes to start a new pool and only ended up with 66 percent utilisation on that storage pool. With the Gen6 hardware, you can fit multiple smaller nodes in one rackmount chassis, so you can add a new storage type and get much better storage efficiency off the bat.
The S3 protocol specifically comes in OneFS 9.0. We have a test cluster for it, which we are in the process of upgrading to have a look at their S3 support. However, I haven't used it yet. Typically, we use something like MinIO, which is an open source object gateway, and put that in front of the PowerScale cluster.
On the archive side, we still have the A200 nodes. While you can go with the A2000s or even deeper than that, by not going too extreme in our pools and positioning data effectively, we can manage pretty much anything thrown our way. I think it's very good.
I would rate the solution as a nine out of 10.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Presales Solution Architect at DXC Technology
Highly scalable data management with a straightforward setup and advanced ransomware protection
Pros and Cons
- "Features like ransomware protection, file lock retention, and third-party integrations (e.g., Superna for ransomware protection) are significant benefits."
- "Improvements could be made on the object storage side."
What is our primary use case?
Our customers consist of banking sectors, financial sectors, infrastructure services, and service divisions, such as oil and gas-based customers. We recommend Dell PowerScale to our customers based on their business demands and data management needs. Primarily, it is used for handling large-scale data, including real-time data generation from remote locations, particularly in industries like automobile manufacturing, where vast amounts of unstructured data are gathered and processed.
How has it helped my organization?
Dell PowerScale helps our customers by providing a highly scalable and efficient data management solution that supports their operations effectively. It allows for the seamless addition of nodes to the cluster as needed, ensuring the organization's data storage can grow proportionately with their demands. PowerScale supports various use cases, from real-time data processing to maintaining data integrity and availability, which is crucial for sectors generating large volumes of data daily.
What is most valuable?
One of the most valuable features is erasure coding, which prevents data loss even if a single node fails. The OneFS OS provides a single namespace, improving data storage efficiency and management.
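The erasure-coding property described above can be illustrated with a toy single-parity code: XOR the data blocks together to form a parity block, and any one lost block can be rebuilt from the survivors. (This is a conceptual sketch only; OneFS uses Reed-Solomon coding, which tolerates multiple simultaneous failures.)

```python
from functools import reduce

# Toy single-parity erasure code: parity = XOR of all data blocks.
def make_parity(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"node", b"fail", b"test"]   # equal-length blocks, one per "node"
parity = make_parity(data)

# Simulate losing data[1]: XOR the surviving blocks with the parity
# block to reconstruct the missing one.
rebuilt = make_parity([data[0], data[2], parity])
assert rebuilt == b"fail"
```

The same principle, generalised to multiple parity units, is what lets a cluster keep serving data while a failed node is rebuilt.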
Features like ransomware protection, file lock retention, and third-party integrations (e.g., Superna for ransomware protection) are significant benefits.
Additionally, the scalability and capability to handle large amounts of unstructured data, real-time data processing, and integration with AI workloads make it a robust solution for enterprises.
What needs improvement?
Improvements could be made on the object storage side. PowerScale offers object storage capabilities, but not in a fully integrated way. If customers have workloads requiring both file and object storage, PowerScale can provide up to 20% object storage, which might not be sufficient for all needs. A unified solution combining NAS and object storage, similar to other solutions that combine block and NAS, would be beneficial.
For how long have I used the solution?
I have worked with Dell PowerScale for almost a year. Previously, I worked with the Isilon model before transitioning to PowerScale.
What do I think about the stability of the solution?
Dell PowerScale is very stable; I would rate its stability a nine out of ten and its availability a ten.
What do I think about the scalability of the solution?
Dell PowerScale is extremely scalable. You can start with a minimum of three nodes and expand based on capacity demands. This feature allows for significant flexibility in managing storage needs. The scalability is further enhanced by its ability to integrate new nodes automatically without manual effort.
How are customer service and support?
Support for Dell PowerScale is never a challenge. There is 24/7/365 support available, including four-hour on-site support for high-priority issues: the on-site engineer will reach the location, review the issue, and begin working on it within four hours. Hence, I would rate technical support a ten.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup for PowerScale is straightforward and easy. It just requires initial configuration, followed by automated processes to complete the setup. Therefore, I would give it a rating of ten for ease of setup.
What's my experience with pricing, setup cost, and licensing?
The price of Dell PowerScale depends on the customer's specific requirements and capacity needs. It is generally considered a costly solution, but its features and capabilities justify the price. It's essential to align the pricing with the business needs and future scalability requirements.
What other advice do I have?
If your business demands a highly scalable data solution, including handling multiple edge locations and large amounts of data effectively, Dell PowerScale is a suitable option. It is particularly advantageous for enterprises with specific AI workloads. However, due to its cost, it may not be the best option for mid-range customers or those looking for general-purpose file services.
I'd rate the solution nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: reseller
Last updated: Sep 25, 2024
Sr. Storage & Backup Engineer at a retailer with 10,001+ employees
Great for creating multiple storage pools; nodes can be scaled without the requirement of extra clusters
Pros and Cons
- "Ability to scale the number of nodes without having to build additional clusters."
- "The UID mapping and how to configure mapping-related things is a struggle."
What is our primary use case?
We're using 95% of the data for user access and 5% for the NFS mount point. We're a startup and a customer of Dell.
What is most valuable?
It's helpful that we're able to scale the number of nodes without having to build additional clusters. We started with a very small footprint and now we have 30 nodes and recently expanded an additional eight nodes on the cluster. We can create multiple storage pools from this if we decide to add a location within the cluster itself.
What needs improvement?
We're struggling with multiprotocol access, where people need to reach the same data from both Windows and Linux. The UID mapping, and how to configure mapping-related settings, is a struggle; I'm still working out how to map those GIDs and UIDs.
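The mapping problem described above can be sketched as follows: the same person has a Windows SID (from Active Directory) and a Unix UID/GID (from LDAP or NIS), and multiprotocol access only behaves correctly if the platform can translate between the two. All names and IDs below are invented for illustration; on OneFS this mapping is managed by the cluster itself, not by user code.

```python
# Hypothetical identity-mapping table: Windows SID -> Unix identity.
SID_TO_UNIX = {
    "S-1-5-21-1004-500": {"uid": 10500, "gid": 10100},  # e.g. alice
    "S-1-5-21-1004-501": {"uid": 10501, "gid": 10100},  # e.g. bob
}

def unix_identity(sid: str):
    """Return the Unix identity mapped to a Windows SID, or None if
    unmapped -- the case where NFS-side permissions silently break."""
    return SID_TO_UNIX.get(sid)

assert unix_identity("S-1-5-21-1004-500") == {"uid": 10500, "gid": 10100}
assert unix_identity("S-1-5-21-9999-123") is None  # unmapped user
```

The unmapped case is exactly the failure mode the reviewer is wrestling with: a user who resolves cleanly over SMB but has no corresponding UID/GID on the NFS side.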
For how long have I used the solution?
I've been using this solution for four years.
What do I think about the stability of the solution?
The solution is stable. If it's being used for the NAS protocol, it's very stable.
What do I think about the scalability of the solution?
The solution is very scalable.
How are customer service and support?
We have direct Dell support only.
How was the initial setup?
The initial setup is straightforward. We have 4,000 users in the company who are accessing the shared drive without any problems. Maintenance can be done by one person.
What's my experience with pricing, setup cost, and licensing?
We have a five-year contract with Dell. We get new hardware each time we renew the contract, and the cost is calculated on a percentage and scalability basis. Every five years, we replace the nodes.
What other advice do I have?
If you're looking for a product to use for the SMB protocol, this is the best solution on the market.
I rate this product nine out of 10.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior System Engineer at Cincinnati children's hospital
Data storage and management system that offers reliability and the ability to share data across multiple channels
Pros and Cons
- "PowerScale helped free up our employees' time to focus on other business priorities. There are now automated jobs such as backing up and replicating data, that reduce the footprint we have. Those types of tasks were previously done manually."
- "Additional metadata reporting would be great. We have to use a separate tool to report on that. We would like to view the age of data and how long it has been since someone has accessed a file."
What is our primary use case?
We use this solution to facilitate sharing data access across multiple platforms. We are a children's hospital and have a lot of PHI data that is critical to keep secure.
How has it helped my organization?
One of the benefits that we have seen from our research department is quotas and chargeback. They are able to control costs based on the projects that they're given and the grants that they receive from the state and federal levels. They are able to track the quotas and chargebacks, which is made possible through Isilon.
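The quota-and-chargeback model described above can be sketched in a few lines: bill each research project for the storage its quota domain consumes. The rate and project names below are hypothetical, purely to illustrate the accounting.

```python
RATE_PER_TIB_MONTH = 20.0  # dollars per TiB per month -- invented figure

# Hypothetical per-project usage, as reported by directory quotas.
usage_tib = {"genomics-grant-A": 120.5, "imaging-grant-B": 48.0}

def chargeback(usage):
    """Monthly charge per project, from TiB consumed under its quota."""
    return {project: round(tib * RATE_PER_TIB_MONTH, 2)
            for project, tib in usage.items()}

print(chargeback(usage_tib))
# {'genomics-grant-A': 2410.0, 'imaging-grant-B': 960.0}
```

Because the quotas track consumption per directory tree, each grant can be charged only for what its own projects store.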
Implementing Isilon has removed the previous silos that existed between different teams. Everyone has been able to virtually separate their resources, but still store them physically on the same box.
PowerScale helped free up our employees' time to focus on other business priorities. There are now automated jobs such as backing up and replicating data, that reduce the footprint we have. Those types of tasks were previously done manually.
Isilon also makes it possible to delete large amounts of data and fix active directory permissions. Previously, we would have to create scripts and run them manually. It also reduced our risk of data loss and gave us the ability to recover from snapshots and replicated data.
What is most valuable?
We have data that is accessed from multiple OS from different models and in departments in our company. The ability to serve up that data to all those different platforms is very useful.
One of the best features of Isilon is its reliable performance and ability to report on its performance. Reliability is really important in our environment, with a 24/7 shop that serves patients. In many instances, data access is critical.
Prior to Isilon, we had to access data from multiple different platforms. This solution offers unified storage and the ability to consolidate and migrate data which was a big step forward. It allowed us to cut costs by eliminating multiple platforms, putting it all on one array.
What needs improvement?
Additional metadata reporting would be great. We have to use a separate tool to report on that. We would like to view the age of data and how long it has been since someone has accessed a file.
For how long have I used the solution?
I have used this solution for eight years.
What do I think about the stability of the solution?
This is a stable solution.
What do I think about the scalability of the solution?
This solution's scalability in an on-premise environment is impressive. We continue to throw large workloads at it and performance has been pretty stable. It has multiple nodes, which is useful when we have outages or code upgrades. We're still able to perform those without interruption of service.
How are customer service and support?
The EMC field support is great. They're easily accessible. We have a specific person we call which is invaluable. We are able to open tickets online instead of spending hours on the phone, no matter what day or time. The only challenge we sometimes experience is a language barrier.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup for this solution is complex. The F900 uses Dell PowerEdge Servers instead of the traditional nodes. We needed to disable memory allocation features on those servers. When we did that, with EMC support, it brought the cluster down and it was down for a couple of weeks.
The deployment involved a storage analyst, data center analyst, and EMC staff. The data center analyst handled the power requirements and cabling requirements. There are 15,000 users across multiple sites.
This solution requires three people to handle maintenance. Maintenance requires verifying whether jobs are successful, identifying failures, and ensuring that replication is occurring correctly. We do regular creation and deletion of shares, files, and folders.
What was our ROI?
We are able to better handle and rein in budgets by making departments responsible for the data that they are consuming under the grants that they get. The deduplication of data has freed up some of the storage costs that we've traditionally experienced. Some of the newer technology allows us to store more data on less equipment, which means that we're using a smaller footprint in our data center.
What's my experience with pricing, setup cost, and licensing?
This solution is priced slightly higher than others on the market but does offer good quality. With this solution's data reduction and compression, we were able to purchase less. Costs have dropped because of the data rate of compression and deduplication.
Which other solutions did I evaluate?
We evaluated Pure Storage but their support was unreliable. We need fast and reliable support, and EMC has always proven that when we have an outage, they're there to help us.
What other advice do I have?
The user interface is very simple to use. Support is critical when deploying this solution. When we were deploying the F900, there were a lot of problems that were beyond our scope. We frequently needed to touch base with system engineers from EMC.
I would rate this solution a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
High-Performance Computing Services Manager at The New Zealand Institute for Plant and Food Research Limited
Simplified data management, tremendously reducing our users’ cognitive overhead
Pros and Cons
- "The most valuable feature we started using, beyond the initial scope for the solution, is the multi-protocol system that allows you to access the same set of files using different network protocols like NFS or SMB. PowerScale’s Unified Permission Model ensures that data security and access permissions are honoured regardless of whether the client is a Windows desktop or a Linux server"
- "The only thing that I think PowerScale could do better is improving the HTTP data access protocol. At the present, you cannot protect access to data via HTTP or HTTPS the same way that you can secure data access through other protocols like NFS or SMB[...]the Unified Permission Model that would allow a user to authenticate before being able to access a private file, does not apply."
What is our primary use case?
PowerScale (formerly Isilon) is effectively a giant NAS. We have two clusters, one for production workloads and one for Disaster Recovery and Business Continuity purposes. These clusters are installed in separate data-centers, physically located in two different places in the country. Both clusters were deployed at the same time when we first adopted the solution, and we have been growing them at an almost equal rate ever since.
Our production cluster is attached to our High-Performance Computing (HPC) environment, and this was the primary use case in the beginning: to provide scale-out storage for the Bioinformatics team, who do omics analysis on plant and seafood organisms that we do scientific research on. As time went on, we expanded our use of the platform for other user groups in the organization.
Eventually, PowerScale became the de-facto solution for anything related to unstructured data or file-based storage. Today, we also use the platform to host users’ home directories, large media files, and really any kind of data that doesn't really fit anywhere else, such as in a SharePoint library or a structured database. Nowadays, almost everyone in the organisation is a direct or indirect user of the platform. The bulk of the storage, however, continues to be consumed by our HPC environment, and Bioinformaticians continue to be our largest users. But we also have data scientists, system modellers, chemists, and machine-learning engineers, to name a few.
Our company has multiple sites throughout the country and overseas, with the two primary data-centers supporting our Head Office and most of the smaller sites. Some of these sites, however, have a need for local storage, so our DR/BCP PowerScale cluster receives replicated data from both our production cluster as well as these other file servers.
How has it helped my organization?
Before PowerScale we used to have a different EMC product. I believe it was VNX 5000, which is primarily a block storage array with some NAS functionality. We did not have a HPC environment, however we did have a group of servers that performed approximately the same function.
Back in those days, raw storage had to be partitioned into multiple LUNs, and presented as several independent block devices because of size limitations of the storage array. When one of these devices started to run out of space, it was extremely cumbersome and time-consuming to shift data away from it, which slowed down our science. We wanted a solution that would free our users from the overhead of all of that data wrangling. Isilon was a good fit because it enabled us to effectively consolidate five separate data stores into a single filesystem, providing a single point of entry to our data for all of our users.
PowerScale helped us consolidate our former block storage into a full-fledged, scale-out, file storage platform with great success. We then decided to expand our use cases further, replacing some of the ancillary Windows File Servers that provided network file shares in our Head Office. We now have a single platform for all our unstructured data needs at our main locations.
We have not explored using PowerScale cloud-enabling features yet, but it is in our roadmap. The fact that those features exist out of the box, and can be enabled as required is another reason the platform is so versatile.
The switch to PowerScale was transformative. Before we implemented it, users had to constantly move their data between different storage platforms, which was time consuming and a high barrier of entry for getting the most of our centralized compute. Distributed, parallel processing is challenging enough, to add data wrangling on top of it created massive cognitive overload. Scientists are always under pressure to deliver on time, and deadlines are unforgiving. The easier we can make leveraging technology for them, the better.
We officially launched our current HPC environment shortly after we introduced Isilon, supporting approximately 20 users. Today, that number has grown more than seventeen-fold to over 350 users across all of our sites. In an organization with nearly 1,000 employees, that's more than a third of our workforce! I credit PowerScale as one of the critical factors behind that growth. PowerScale simplified data management because it allows you to present the same data via multiple different protocols (e.g., SMB, NFS, FTP, HTTP), tremendously reducing our users’ cognitive overhead.
Before adopting PowerScale, we also faced capacity constraints in our environment. I had to constantly ask end-users to clean up and remove files they no longer needed. Our block data stores were constantly sitting at around 90% utilization. Expanding the storage array was not only expensive: every time we wanted to provision additional space, we had to decide whether it was justified to re-architect the environment versus adding yet another data store. And going with the latter option meant going back to our users again to free up space before more capacity could be added. All of this wasted massive amounts of time that could otherwise have been spent running jobs and doing science.
Once we introduced scale-out storage, capacity upgrades and expansion became straightforward. The procurement process was simplified because now we can easily project when we will hit 90% storage utilization, and our users have visibility of how much storage they are individually consuming thanks to accounting-only quotas, which help keep storage usage down. PowerScale provides a lot of metrics out of the box, which are easy to navigate and visualize using InsightIQ and, more recently, DataIQ.
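The projection exercise mentioned above is simple once growth is steady: given current usage and a monthly growth rate, estimate how many months remain until the cluster crosses 90% of installed capacity. All figures below are made up for illustration.

```python
import math

def months_until_threshold(used_tib, capacity_tib, growth_tib_per_month,
                           threshold=0.90):
    """Months until usage crosses `threshold` of installed capacity,
    assuming linear growth. Returns 0 if already past the threshold."""
    headroom = capacity_tib * threshold - used_tib
    if headroom <= 0:
        return 0
    # Round up: the threshold is crossed partway through the final month.
    return math.ceil(headroom / growth_tib_per_month)

# e.g. 1400 TiB used of 2048 TiB installed, growing 25 TiB/month:
print(months_until_threshold(1400, 2048, 25))  # -> 18
```

A projection like this is what turns capacity purchases from an emergency into a routine procurement line item.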
I can certainly recommend PowerScale for mission-critical workloads, it is a powerful but simple platform with little administration overhead. We use it in production for a variety of use cases, and it would be hard for our organization to operate effectively without it.
What is most valuable?
When we selected Isilon as our preferred storage provider, many considerations came into play, but the deciding factor was how little administration it requires. We no longer need a dedicated storage administrator looking after it. Instead, our Systems Engineers can handle the day-to-day operations without requiring in-depth expertise in storage management. The simplicity of the solution was a strong selling point when we first started looking into it. For example, when you have replicated clusters, you must ensure that you can actually failover between them in the event of a disaster. PowerScale makes setting up and checking the status of replication schedules extremely simple.
Over time, we started using more and more of its capabilities. I believe the most valuable feature we started using, beyond the initial scope for the solution, is the multi-protocol system that allows you to access the same set of files using different network protocols like NFS or SMB. PowerScale’s Unified Permission Model ensures that data security and access permissions are honoured regardless of whether the client is a Windows desktop or a Linux server. Our users can now access the data they need for their research, without having to deal with multiple credentials depending on the environment they are using, or having to rely on specific clients. The same file can be opened and edited from Windows Explorer or from the Linux command line, and we can guarantee that the ownership and permissions of that file will remain consistent. It reduces friction and cognitive overhead, which is what I value the most.
Data security and availability are also included in the solution, out of the box. Of course, you still need to know how to configure the different features for your use case, but from a data security and availability perspective, you can leverage replication schedules, snapshotting, increased redundancy at rest, and all of those features which we now consider must-haves. With PowerScale, I can have peace of mind that if a specific directory needs to be protected, it will be protected.
What needs improvement?
The only thing that I think PowerScale could do better is improving the HTTP data access protocol. At present, you cannot protect access to data via HTTP or HTTPS the same way that you can secure data access through other protocols like NFS or SMB. Either you can access a file because it can be accessed by anyone in the organization, or you cannot access it at all; there is no in-between. HTTP is not considered a first-class data access protocol, so the Unified Permission Model, which would allow a user to authenticate before being able to access a private file, does not apply.
However, with the recent introduction of S3 starting from OneFS 9, I believe the necessary plumbing is already there for HTTPS to also be elevated to a first-class protocol in the future because both protocols sit behind a web server under the hood. It does not sound like it would be too complicated to implement, but it would be a valuable feature and it is currently missing.
For how long have I used the solution?
We started exploring storage solutions for our environment back in 2012. We have been using PowerScale for nearly 10 years now.
What do I think about the stability of the solution?
PowerScale has never failed us; it has been running with almost 100% uptime since it was first installed. We have only had to shut down the entire cluster once, because we were moving data-centres. In earlier versions, you sometimes had to reboot the entire cluster for significant OS upgrades; today, rolling upgrades are the norm, where only a single node is ever down at a time.
What do I think about the scalability of the solution?
At the beginning, we procured four initial nodes, which amounted to about 400 TiB of usable space. We now have just shy of 2 PiB of total installed capacity at each cluster. Our storage usage has grown quite a bit, moving from terabytes to petabytes, but I have no doubt that we will be able to continue growing at the same rate or even more in the future. The original Isilon had already been designed to scale to multiple petabytes, PowerScale will only continue to push that further. We highly value being able to grow our capacity without having to be concerned with platform limits.
PowerScale now also offers more choice when it comes to mixing and matching different types of storage nodes within the same cluster. For example, you can get all-SSD or NVMe nodes alongside old-fashioned SAS disks, which you might want to consider adding when performance is critical in your environment. In our case, the performance we get without these new nodes is sufficient for our needs. The best part is that should we ever need to provide a faster pool of disks, there is no administration overhead to do so: just add the new node types, set the tiering rules that you want, and let the system rebalance itself. No partitioning, no moving data around yourself. It is transparent to the end-users as well as the administrators. You can even tier data to a cloud pool for archive if you want! This simplicity is, again, one of the main reasons we decided to stay on the platform.
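The "set the tiering rules and let the system rebalance itself" idea can be sketched as a simple age-based placement rule. Tier names and thresholds below are invented; real OneFS file-pool policies are configured through the management interface, not written as code.

```python
from datetime import datetime, timedelta

def choose_tier(last_accessed: datetime, now: datetime) -> str:
    """Hypothetical age-based tiering rule: hot data on flash, warm data
    on spinning disk, cold data on an archive or cloud pool."""
    age = now - last_accessed
    if age < timedelta(days=30):
        return "nvme"      # hot: fast flash pool
    if age < timedelta(days=365):
        return "sas"       # warm: spinning-disk pool
    return "archive"       # cold: deep archive / cloud pool

now = datetime(2024, 6, 1)
assert choose_tier(datetime(2024, 5, 20), now) == "nvme"
assert choose_tier(datetime(2023, 12, 1), now) == "sas"
assert choose_tier(datetime(2020, 1, 1), now) == "archive"
```

The point of the sketch is that the policy is declarative: the administrator states the rule once, and the platform moves data between pools without anyone repartitioning or copying files by hand.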
How are customer service and support?
I needed technical support on a few occasions, specifically while implementing multi-protocol access for Linux and Windows clients. There was an instance when my engagement with support had to run for longer than I expected, but that was because the solution I wanted to achieve was highly complex from a technical perspective. We had to escalate the issue a few times to the next tier of engineers until they came through with a solution. It was always an excellent customer service experience, and I can certainly recommend Dell EMC Support to anyone who asks.
That said, we only tend to contact Support when we are unable to resolve issues or find the answers we need in the product knowledge bases or the community forums. The availability of product information online is both comprehensive and of excellent quality.
How was the initial setup?
The initial setup was straightforward. Since it was a greenfield implementation, we did not run into any issues. EMC, who later merged with Dell to form Dell EMC, even let us evaluate the platform in our own data-centre, so by the time we decided to procure the solution, all we had to do was revert to “factory settings”. The longest part of the process was migrating around 84 TiB of data from our old data stores, as happens with any data migration exercise. But once the data had been relocated, it became a matter of simply pointing the servers to the new data store entry points. Users were happy to take it from there, and were certainly overjoyed at the additional space they had to work with.
What about the implementation team?
It was a long time ago now so details are fuzzy, but we dealt with EMC directly, with the help of an integrator for some of the initial design and implementation. EMC was our primary point of contact for platform-specific support when we first started, and their guidance around the different features of the platform was invaluable.
Today, that same integrator continues to help us with ongoing procurement, simplifying decisions around which of the many available node types might be the best suited to our environment, or ensuring that we stay on top of our node refresh cycle as older ones reach end of life.
What's my experience with pricing, setup cost, and licensing?
Price was also a significant factor in our decision to go with PowerScale. The team at EMC, now Dell EMC, came through with a highly competitive offer that tipped the scales towards their solution. There was only one other solution around the same price point, but it could not match PowerScale on features. That other solution is no longer on the market.
The licensing model is interesting, because it is essentially “pay to unlock”. Most of the available features are software-defined, so they are already available in OneFS, the underlying Operating System, waiting for you to activate them as needed. There are a few additional costs, however. NDMP backups require you to install fibre cards, which are sold separately. Then of course you have the cost of tape and off-site storage, but you would have those same costs with most other platforms. Luckily, we do not need to back-up the whole cluster because we can rely on cluster replication and snapshots (on both source and target clusters) to achieve our RPOs. But we do have a legal requirement to preserve some data for an extended period, so we use tape for that.
Which other solutions did I evaluate?
We evaluated three other competing solutions based on multiple criteria. Some of those solutions no longer exist, or have evolved into a different offering. We went through a rigorous evaluation process, which assessed the platforms’ scalability, ease of use or complexity to administer, performance, and of course TCO. Isilon was the brand name that blew all others out of the water. It was an easy decision for us to make based on the criteria we set.
What other advice do I have?
I give Dell EMC PowerScale a high 9 out of 10. It is not quite a 10, mainly because we do not have a use for all the features it provides, which you need to be aware of from a security point of view (e.g., to ensure that they do not introduce unexpected risk). The ecosystem has also grown somewhat more complex in terms of the many different types of nodes you can choose from. This gives you a lot of flexibility, but it does go slightly against the idea of simplicity that was so attractive initially.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Working Student at HELLA
Comes with good performance but improvement is needed in CLI and search options
Pros and Cons
- "Dell PowerScale's performance is good."
- "The product needs to improve CLI since commands are complex. The search option is also difficult since you must give the full path."
What is our primary use case?
We use the solution for NFS.
What is most valuable?
Dell PowerScale's performance is good.
What needs improvement?
The product needs to improve CLI since commands are complex. The search option is also difficult since you must give the full path.
For how long have I used the solution?
I have been using the product for more than three years.
What do I think about the stability of the solution?
Dell PowerScale is stable.
What do I think about the scalability of the solution?
My company has more than 1000 users for the solution, and it is scalable.
How are customer service and support?
Dell PowerScale's support is good.
How would you rate customer service and support?
Positive
How was the initial setup?
Dell PowerScale's deployment is complex.
What other advice do I have?
I rate Dell PowerScale a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.