We use PowerEdge for server virtualization, and it gives us the ability to move server images on and off the platform very quickly.
The PowerEdge Rack Servers are a go-to for handling high-performance workloads. I've had positive experiences with the amount of compute they can provide per blade. The blade I'm currently familiar with is the MX740c, which has dual processors with a total of 24 cores. The MX7000 chassis holds eight of these blades.
It also provides networking at the rear of the chassis, which connects to the blades' mezzanine cards. I currently use the MX5108, which provides four 25-gigabit-per-second connections to each blade. Each MX5108 can give you a 100-gigabit uplink to your core. I currently have the MX5108s deployed in two fabrics, fabric A and fabric B, and the two fabrics are peered using a VLTi (VLT interconnect).
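As a rough sketch of what that peering looks like on a pair of MX5108s running OS10 (the domain ID, addresses, and port numbers here are hypothetical placeholders, not my production values):

```
! Minimal VLT peering sketch for a pair of MX5108s running OS10
! (hypothetical domain ID, addresses, and port numbers)
vlt-domain 1
 ! peer heartbeat over the management network
 backup destination 192.168.10.2
 ! dedicated VLTi links to the peer switch
 discovery-interface ethernet1/1/9
 discovery-interface ethernet1/1/10
 ! let either peer route on behalf of the other
 peer-routing
```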
Then I have that VLTi pair uplinking to our distribution core, which uses a leaf-spine architecture. With that, I get 400 gigabits per second of uplink and downlink to the chassis: four MX5108s at 100 gigabits each.
You can't have compute performance without using more power. That said, when I consider the power consumption and performance of the MX740c, depending on how much memory I install on each blade, I get the best bang for my buck. I'm not going to flatly call it inexpensive or efficient; it depends on how hard I'm driving the processors, what I'm running, how much memory I use, and, again, which blades I purchase with the chassis. Overall, it's very flexible, and it's what I make of it.
As for running the latest high-demand applications, depending on my selection of hardware, it should be able to run nearly anything I would want. If I want to run Oracle servers on the PowerEdge blades, for example, I can do that. They'll run it.
Recently, I've seen my use case migrate from the M1000 chassis to the MX7000 chassis. The improvement I saw was in uplink bandwidth: with the M1000 I could get a maximum of 160 gigabits per second, whereas now my maximum is 400 gigabits per second. I could have selected different switches, but the MX5108 provides the uplink bandwidth I need from the chassis.
Overall, I've seen an improvement in the network bandwidth, as well as an improvement in the speed of the blades and the processors.
The PowerEdge has also helped to reduce data processing time in the company, which makes things run better because it's faster to move data onto the blades. It's also faster when it comes to deploying compute images. It's hard to pinpoint how much time we've saved because that also depends on the network infrastructure in place. In my experience over the last couple of years, migrating from the M1000 to the MX7000 has cut image deployment from a few minutes to several seconds.
The MX7000 gives us the most concentrated compute in the smallest possible footprint. It also provides a large amount of bandwidth to the blades, which is important because it lets the user move as much data on and off the blade platforms as quickly as possible.
The iDRAC telemetry is very useful for monitoring the system and providing analytics. You can use commands from the CLI, scripting, the REST interface, or the point-and-click GUI. It's very flexible. I prefer scripts because I monitor many blades and many chassis, and I can script most of my monitoring requirements.
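To give an idea of what that scripting looks like in practice, here's a minimal sketch that polls each iDRAC's Redfish REST interface for hostname, power state, and overall health. The IPs and credentials are hypothetical placeholders, and it assumes curl and jq are available:

```bash
#!/usr/bin/env bash
# Sketch: poll basic health from several iDRACs via the Redfish REST API.
# The IPs and credentials below are hypothetical placeholders.
IDRACS="10.0.10.11 10.0.10.12 10.0.10.13"
CREDS="monitor:secret"

for ip in $IDRACS; do
  # System.Embedded.1 is the standard system resource in iDRAC's Redfish tree.
  curl -sk -u "$CREDS" "https://$ip/redfish/v1/Systems/System.Embedded.1" |
    jq -r '[.HostName, .PowerState, .Status.Health] | @tsv'
done
```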
The accelerated GPU options help to support the demanding workloads that we run. For instance, they provide better performance for remote desktop sessions.
The blades are hot-swappable, and in a virtual environment, being able to easily upgrade your hardware platform to better and faster hardware is a benefit.
On the MX7000 platform, they should continue to release better and faster blades.
I have been working with Dell PowerEdge Rack Servers for the past couple of years.
Stability-wise, this product is solid. We have very little downtime.
I need to make sure that the images that are running on the blades are reliable, and it provides that. Beyond that, I'm happy with the performance.
Scalability is up to the engineer. It is easily scalable, depending on the network architecture you use to connect it all together.
I have been in contact with technical support a lot. Sometimes I run into little anomalies that I need an explanation, workaround, or fix for, and when I bring them to their attention, they usually get their developers on it and come back with a solution rather quickly.
I would rate the technical support a ten out of ten. We have really good Dell support.
I have worked with solutions from other vendors, and I like Dell's PowerEdge solutions. I worked with Dell years ago, then went to a different vendor in a different job, and in this particular job, I've come back to Dell.
I've got to say that Dell hardware and support are very good, and I'm happy with it.
The initial setup is straightforward. For somebody with prior experience with Dell products, it's simple. It's no more complicated than deploying the M1000, the predecessor chassis to the MX7000.
I can deploy and network an MX7000 chassis and have all the blades loaded with ESXi within a day. I make heavy use of my own scripts: usually a shell script mounts the ISO image to be installed on all the blades and then reboots them, and each blade boots the ISO and installs VMware. All of that happens quickly.
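As a sketch of that kind of script (the share paths, credentials, and iDRAC IPs are hypothetical placeholders, and exact racadm attribute names can vary by iDRAC release):

```bash
#!/usr/bin/env bash
# Sketch: attach an ESXi installer ISO as virtual media on each blade,
# set a one-time boot to it, and power-cycle the blade to start the install.
# All IPs, credentials, and share paths are hypothetical placeholders.
BLADES="10.0.20.11 10.0.20.12 10.0.20.13 10.0.20.14"
RAC="root"; RACPW="calvin"

for ip in $BLADES; do
  # Mount the ISO from a file share as remote virtual media.
  racadm -r "$ip" -u "$RAC" -p "$RACPW" \
    remoteimage -c -l //fileserver/images/esxi.iso -u shareuser -p sharepass
  # Boot once from virtual CD/DVD, then power-cycle the blade.
  racadm -r "$ip" -u "$RAC" -p "$RACPW" set iDRAC.ServerBoot.FirstBootDevice VCD-DVD
  racadm -r "$ip" -u "$RAC" -p "$RACPW" set iDRAC.ServerBoot.BootOnce Enabled
  racadm -r "$ip" -u "$RAC" -p "$RACPW" serveraction powercycle
done
```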
After that, I simply put in the network parameters for the ESXi hosts, add the hosts to vCenter, and they're ready to go. I already have predetermined configurations that I use for the network blades, the MX5108s. I use those as a template for all four networking blades on the back of the MX7000 and simply paste them in. I can usually have all four configured within 30 minutes to an hour.
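For the ESXi network parameters, the per-host work boils down to a few esxcli calls, roughly like the following (the addresses are hypothetical placeholders; run on the host console or over SSH):

```bash
# Sketch: static management-network settings on a fresh ESXi host.
# All addresses are hypothetical placeholders.
esxcli network ip interface ipv4 set -i vmk0 -t static \
  -I 10.0.30.21 -N 255.255.255.0
esxcli network ip route ipv4 add -n default -g 10.0.30.1
esxcli network ip dns server add -s 10.0.30.5
```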
The pricing is very competitive.
When you compare against public cloud solutions, having the compute on-site is always going to be faster. That said, it really depends on how big of a pipe your institution or data center has to the cloud. With more bandwidth to the cloud and back, perhaps latency will be lower, but I don't see how it can be faster than having the compute on-site.
This product has built-in security features, although it's up to the system engineers and network engineers to keep the firmware properly upgraded. They need to follow Dell's baseline release for the chassis to ensure that the firmware and software for the blades and the network cards meet the baseline requirements. If you match those requirements, the security will follow. It's easier to manage when your baseline is matched everywhere.
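One way that baseline check can be scripted, as a rough sketch: racadm's getversion dumps the firmware levels on a blade, so a loop like this (with hypothetical IPs and credentials) collects inventories that can be diffed against Dell's published baseline:

```bash
#!/usr/bin/env bash
# Sketch: collect firmware inventories from each blade's iDRAC for
# comparison against the Dell baseline. IPs and credentials are hypothetical.
BLADES="10.0.20.11 10.0.20.12 10.0.20.13 10.0.20.14"

for ip in $BLADES; do
  echo "=== $ip ==="
  # getversion lists BIOS, iDRAC, and other component firmware versions.
  racadm -r "$ip" -u root -p 'calvin' getversion
done
```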
Overall, this is a good product but there is always room for improvement.
I would rate this solution an eight out of ten.