We use it for an x86 infrastructure that covers:
- Databases
- Virtualized environment
- Application servers
- Exchange and analytical applications
Linux and Windows are the OSs.
This has drastically reduced our data center space and improved our cooling and power consumption. Cabling complexity and volume have been reduced as well.
The tool is pretty good in terms of managing data, building out systems, and expanding the infrastructure. We are involved in property management. We use it for database and web hosting, and for applications that run across multiple systems.
It helps our company with backend capacity. It supports the infrastructure. It's performing well.
I would like to have a single console where we could manage multiple data centers. I'm expecting something like hardware visualization. I have different data centers where different BladeSystems are running.
Whenever there is an event, I need to get into that system individually and manage from different consoles. I would like to see a centralized console for BladeSystem management.
We have multiple blade chassis, each managed from its own console. Having a single, centralized console to manage all the chassis would make it easier to handle them during an event or while troubleshooting. For example, Cisco has UCS Director for managing multiple data centers; if HPE could provide similar centralized management, that would be great.
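For what it's worth, HPE OneView does expose a REST API, so a rough approximation of a cross-site view can be scripted today by polling each appliance and merging the results. The sketch below is a minimal illustration under that assumption; the appliance hostnames, credentials, and API version shown are placeholders, not a definitive implementation.

```python
# Minimal sketch: build one inventory of blade enclosures across several
# HPE OneView appliances via the OneView REST API. Hostnames, credentials,
# and the API version are placeholder assumptions.
import requests

APPLIANCES = ["oneview-dc1.example.com", "oneview-dc2.example.com"]
USER, PASSWORD = "administrator", "secret"

def list_enclosures(host):
    base = f"https://{host}"
    # Authenticate; OneView returns a session token for subsequent requests.
    resp = requests.post(
        f"{base}/rest/login-sessions",
        json={"userName": USER, "password": PASSWORD},
        headers={"X-Api-Version": "800"},
        verify=False,  # lab-only; use proper CA verification in production
    )
    resp.raise_for_status()
    token = resp.json()["sessionID"]
    # Fetch every enclosure this appliance manages.
    encl = requests.get(
        f"{base}/rest/enclosures",
        headers={"Auth": token, "X-Api-Version": "800"},
        verify=False,
    )
    encl.raise_for_status()
    return encl.json().get("members", [])

for host in APPLIANCES:
    for e in list_enclosures(host):
        print(host, e.get("name"), e.get("status"))
```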
The tool is stable, although we have frequent failures in certain parts of it. This might be because we are using older-generation hardware, such as the HPE BladeSystem C7000 chassis and BL460c Gen6 servers.
We haven't done much scaling. We are managing the individual BladeSystems.
I have used the technical support team and they are good. It is a straightforward process, and I have worked alongside them while they provided service.
Before this solution, we had individual rack-mounted servers from the DL series. Those are being consolidated into the BladeSystem now.
I was involved in the installation and it was straightforward. We had support from the local vendors as well.
We did not look at anyone else, and we have not used any other HPE products as such. We have bought into Cisco solutions as well, and that keeps expanding.
I would definitely recommend this tool.
We are using it for our hypervisor. Within our organization, there are roughly 2,000 users using this solution.
Software-defined hardware might sound counter-intuitive, but it is the way of the future.
HPE BladeSystem is easy to use. It conserves a lot of space.
The management side of this solution could be improved. In our case, it's managed via OneView, which is an appliance. It could be better; its interface doesn't feel 100% intuitive.
In addition, HPE tech support is very poor.
I have been using HPE BladeSystem for three years.
HPE BladeSystem is both stable and scalable.
A contractor implemented this solution for us.
I would absolutely recommend this solution to others. Overall, on a scale from one to ten, I would give HPE BladeSystem a rating of eight.
The Virtual Connect side of networking, and the manageability it provides, is by far the biggest win for us. The blades come and go as rack servers do, but the virtualized networking behind them means a lot less hands-on work and a lot more manageability.
The biggest benefit is the minimum downtime due to the programmatic nature of the whole thing.
There’s nothing that I don't already know is coming out.
We've had them for quite a few years now. Early on it was a bit hit and miss, but more recently it has become far more stable.
We have not gone full scale with it. We only have it in small areas of the data center at the moment.
We have not used technical support.
We knew we needed a new solution because our data center costs for racks were rising and we had to slim down into a more compact solution.
The early stages weren't as smooth as they should have been. I was involved in the initial setup, and it was complex because of the way we wanted to use it: a very virtualized network and storage capacity. It wasn't quite straightforward, and it took a great deal of complex planning to make sure we got it right the first time so the initial setup wouldn't cause problems later on.
We also looked at Dell solutions.
When selecting a vendor, reputation and pricing are most important.
Spend as much time as possible planning before you go anywhere near it.
At a very high level, it gives us flexibility. Being on a virtual system, we can upgrade or downgrade depending on the performance we need for our different applications. We've had situations with downtime where our application state wasn't affected because the workload moved onto the remaining blades, and we've then swapped out the faulty blades when needed.
At a very high level, what it does is it gives us the ability to scale up. It gives us redundancy. It's cost efficient in that sense.
As I’ve mentioned, the benefits are flexibility and the fact that we can scale up our environment as and when we want to.
I'm probably not the right person to provide this information, but I would like to see real-time monitoring of the performance of the estate. We do basic monitoring today; I'm not sure how robust it is, or whether it can look ahead and flag faults before they occur.
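As a starting point for that kind of estate monitoring, the iLO on each blade exposes the standard Redfish REST API, which reports a rolled-up health status per server. Below is a minimal polling sketch under that assumption; the iLO hostnames and credentials are placeholders, and a real deployment would want proper TLS verification and alerting rather than print statements.

```python
# Minimal sketch: poll each blade's iLO over Redfish and report any system
# whose rolled-up health is not "OK". Hostnames/credentials are placeholders.
import requests
from requests.auth import HTTPBasicAuth

ILOS = ["ilo-blade01.example.com", "ilo-blade02.example.com"]
AUTH = HTTPBasicAuth("monitor", "secret")

def system_health(ilo):
    base = f"https://{ilo}/redfish/v1"
    # The Systems collection lists each server the iLO manages (usually one).
    systems = requests.get(f"{base}/Systems", auth=AUTH, verify=False).json()
    for member in systems.get("Members", []):
        sys = requests.get(f"https://{ilo}{member['@odata.id']}",
                           auth=AUTH, verify=False).json()
        status = sys.get("Status", {})
        yield sys.get("HostName") or member["@odata.id"], status.get("Health")

for ilo in ILOS:
    for name, health in system_health(ilo):
        if health != "OK":
            print(f"ALERT {ilo} {name}: health={health}")
```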
From an application point of view, I want as much redundancy as possible, and I want to avoid downtime. I want good general performance. Anything that helps with that would be best.
I haven’t rated it higher because of stability and monitoring capability.
We've been using it for a while now, about four or five years, and we've probably had about three or four critical incidents. Over that period, that's not too bad.
Blades have malfunctioned, so we’ve had to switch over. Physically, those blades have had to be replaced.
Every three or four years, we review our hardware estate. We're going through a process right now to increase the capacity in our estate. We do a complete application review and we understand what infrastructure environment is needed to support that.
We get good service from our reseller; I rate them 7/10.
Do your groundwork. Understand not only what you need right now but what you will need in the future, because technology keeps changing and evolving. Do fairly thorough due diligence on what your estate will be needed for over the next couple of years.
Look around. Shop around with multiple resellers to get the best price.
The HPE BladeSystem is a universal platform for server infrastructure. It is easy to manage and to connect to your other infrastructure, fiber channel network, and so on.
It's mainly focused on management and reliability. It's a fairly reliable platform, almost no outages. It works perfectly.
It could be improved in terms of management and uptime. The downtime we have when doing firmware upgrades is not acceptable, but it's getting better and better.
We have been working with HPE BladeSystem C7000 since 2007. Until recently, the firmware updates on the connectivity modules (FC and Ethernet) and Virtual Connect could not be done without downtime. For an enterprise system, this is not acceptable. It is only since last year that we did the first online upgrades without any downtime.
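The usual way to make those rolling module updates safe is to verify path redundancy from the hosts before taking each module down. As a hypothetical illustration, the sketch below checks a Linux host's device-mapper-multipath state and refuses to proceed if any LUN is down to a single active path; the output parsing is deliberately simplistic and would need adapting to your multipath configuration.

```python
# Minimal sketch: before updating one FC/Virtual Connect module at a time,
# confirm from a Linux host that every multipath LUN still has more than one
# healthy path, so taking a module offline won't drop storage access.
import subprocess
from collections import defaultdict

out = subprocess.run(["multipath", "-ll"],
                     capture_output=True, text=True, check=True).stdout

paths = defaultdict(int)
current = None
for line in out.splitlines():
    if not line or line.startswith(("size=", " ", "|", "`")):
        # Continuation lines: count healthy paths for the current device.
        if current and "active ready running" in line:
            paths[current] += 1
    else:
        current = line.split()[0]  # header line names the multipath device
        paths.setdefault(current, 0)

single = [lun for lun, n in paths.items() if n < 2]
if single:
    raise SystemExit(f"Do NOT proceed: single-pathed LUNs: {single}")
print("All LUNs have redundant active paths; safe to update one module.")
```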
It's very stable; just minor issues; no big issues.
You scale within the enclosure. You get 16 servers and then you can buy extra enclosures. It's scalable.
Technical support is OK. I'm not directly working with support myself, just indirectly. But from what I hear from the engineers, it's OK.
We previously used other HPE servers, just the old ProLiant servers and other lines. We converted to BladeSystems and these products.
Initial setup is quite complex. You have to think before you start.
We chose this solution quite a long time ago. I don’t remember what else we considered. We chose HPE because we were already an HPE customer.
Invest in preparation. The HPE BladeSystems are being succeeded by the Synergy systems, announced last year. That's the successor, so look at that.
BladeSystem consolidates hardware into single, more manageable components: everything from FlexFabric and Virtual Connect to being able to manage your environment holistically from a single pane of glass, including vCenter and blade integration. The other thing is HP's OneView, a standardized management console that lets you manipulate pretty much everything on the blade infrastructure side as well as networking. Anything in the HP product line can have its infrastructure managed through OneView.
There are definitely great advantages in the efficiency of time savings, both from a personnel perspective and in the ability to quickly deliver new offerings.
At the time, we were trying to learn the technologies while we were setting up the data center, and that's why we used professional services. We ended up having to collectively learn on the fly in setting up some of the new features we had. This was three years ago when we set up two new data centers and moved our operation out of an outsourced line of business.
It's been really stable, we haven't really seen any problems.
We've been expanding most of it; going to solid-state storage has been the latest set of upgrades, and we're continuing to grow that. From a backup standpoint, we're also looking at whether we can start using a lower storage tier to house all of our backups, so we can get off tapes as part of our overall strategy. We've got nine branch centers that are ultimately consolidating into the data center, so we're trying to funnel those down into the data center and back them up there.
In terms of overall support, you're dealing with enterprise infrastructure support personnel. If you're paying for an enterprise level of support, and this being such a foundation of your infrastructure, when there are issues they're usually critical, and the expectation is an immediate response. Our experience is that sometimes you get right through to a qualified individual from the start; other times you have to play the escalation game, which in an emergency can be a bit of a headache.
I would say the kind of support you get is sometimes hit or miss. Traditional hardware replacement usually isn't a big deal; HP's remote support is really responsive for hard drive failures and things of that nature. But the level of the technicians you get when you call in for technical support really does vary. As an enterprise customer paying for enterprise-level support, when you call, it's because you're in the middle of a catastrophe or working through an emergency, so having to work through multiple tiers of technicians who may not have the necessary strengths does not help the process.
We had HP blades at the previous location, so we just bought the next generation of blades in the same enclosure. We did actually move some of it across: we bought some initial hardware to seed things, and then, as we freed equipment up from our managed site, we could bring some of that technology across and continue to scale up in the new environment.
We did use some technical support through professional services. We found some of the expertise good and some not so good; they didn't know enough in places. When we came around to setting up our VMs on the network they had built, we had some challenges. There was a bit of a learning curve for both organizations. Not all positive.
I would say the licensing model is probably my one biggest caveat. A lot of vendors use a licensing model where you have to license the different functionality and feature sets you want, but I think for a lot of customers that's a stumbling block: you may not always be able to understand, upfront, exactly what you'll want to utilize, and having to make that additional investment later, when the dollars may not be there, is a little bit difficult.
If you're looking for a unified management interface where you can manage multiple products through a single pane of glass, like OneView for example, it might make sense. If you're heavily invested in the HP product line, again, it might make sense. But really in this day and age, computing is computing for the most part, so I think it really depends on what influences your purchasing decision, whether it's politics or technical merit.
We were growing beyond our data center rack space with our 1U/2U rack-mount servers. We had a lot of them, mostly HP DL360 and DL380 servers, and we were burning through rack after rack. When we consolidated to blades, we were able to reduce our footprint in the data center.
They probably already have a lot of these features; I just don't know about them yet. I'm looking forward to using the Security Central console, which I know you do have to manage. It's a console to manage Aruba equipment, all your switches, the ProCurve lines, blades, and chassis, all in a single pane of glass. I'll be able to look at all those components and how they're working with each other.
I've used it for seven or eight years.
It has been very stable.
We use about half of each chassis that we have in place, and we have redundant chassis just in case one should go down. That has never happened, but from a scalability standpoint, we continue to increase the number of blades we use in each of those racks and chassis.
I actually haven't had to call them a lot. Most of the information and answers I've needed, I've found online. When I have called them about the chassis, the ProCurves, or the switch line, they seemed to have the information. The only downside was when I was doing an OS or firmware upgrade on the switches. We had them set up in a virtual stack, and the tech gave me bad information: he said they would reboot one at a time as each switch upgraded. Instead, they all rebooted at the same time and caused an outage, which was unfortunate.
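The lesson we took from that outage is to verify each stack member is actually back before touching the next one, rather than trusting that reboots will be staggered. A hypothetical sketch of that discipline is below; the switch hostnames are placeholders, and the actual upgrade command is left as a stub because it is platform-specific.

```python
# Minimal sketch: after rebooting a stack member for a firmware upgrade,
# wait until its SSH port answers again before touching the next one.
import socket
import time

SWITCHES = ["switch-a.example.com", "switch-b.example.com"]

def wait_until_up(host, port=22, timeout=900, interval=15):
    """Poll the host's SSH port until it accepts a TCP connection."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(interval)
    return False

for switch in SWITCHES:
    print(f"Upgrading {switch} ... (platform-specific step omitted)")
    # reboot_and_upgrade(switch)  # stub: issue the upgrade/reload here
    if not wait_until_up(switch):
        raise SystemExit(f"{switch} did not come back; stopping the rollout")
    print(f"{switch} is back online; proceeding to the next member")
```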
The price is acceptable.
I'd recommend it, but weigh the pros and cons of the points of failure, because there are single points of failure unless you have two chassis in place. Also consider power and cooling consumption; blades in a chassis seem to consume a lot of energy. We use co-location facilities, so we don't have to think much about how much power and energy we consume, because we don't own the data center; it's just a fixed price for the rack. But if you own your own data center and have to pay for the power and cooling, fully populated, racked-and-stacked chassis can consume a lot of energy.