What is our primary use case?
The primary use case is virtual machines. We run VMware on it and host virtual servers: applications, web servers, and things of that nature.
The solution enables us to run VDI, backups, and web platforms for our organization in a hybrid cloud environment.
How has it helped my organization?
It makes life easier for us when we are deploying new technology, as we have the building blocks already in place.
Once the majority of our systems are on Synergy, it will put everything under one umbrella. At this point, we are only 16 or 18 frames in. However, once we get everything onto the Synergy platform, it will all be manageable under one umbrella, and it will all be standard infrastructure.
The solution helps us to implement new business requirements quickly. This is primarily from the standpoint of being able to deploy new servers and machines. As requests come in, we can turn them around within a matter of a day or two because we already have the building blocks in place.
What is most valuable?
We have been able to deploy what teams request much more quickly. If the development team needs 20 servers to run a particular platform, we can give that to them within a day or two.
What needs improvement?
It would be nice if the OneView umbrella could truly be one view and cover everything. Synergy has its own version of OneView. ProLiant Servers have their own version of OneView, so it truly isn't one view. We also have other platforms within HPE that aren't covered by OneView at all. We have many views instead of one view, and it would be nice if that could be resolved. That would help us a lot.
The timeliness of updates, firmware, and things of that nature needs improvement: knowing what we have to apply and when, so that we can maintain a consistent firmware load on each one of our frames.
What do I think about the stability of the solution?
The stability is not good right now. We have had a couple of outages, during which we received very good support from HPE. However, we have not been able to come up with cut-and-dried reasons for why the outages occurred. They have not been reproducible, so it has been difficult to regain our trust in the platform.
We still have some questions regarding the stability of the platform.
What do I think about the scalability of the solution?
The scalability is very good.
How are customer service and support?
From a technical support standpoint, it seems as though the platform came out more quickly than the technical support behind it did. It is much easier to find good tech support people from HPE on the older product line as opposed to Synergy. Synergy is a bit more limited.
Which solution did I use previously and why did I switch?
We came from a blade environment. Now, we are on Synergy. It is a continuation of a product line that we have been using for well over a decade, and it is just familiar territory.
We were already heavily into c7000 blades. Synergy is a continuation from c7000s. From our standpoint, at least from the server standpoint, the functions are basically the same.
The c7000 blade is retiring, and Synergy is the next iteration of this type of blade platform, so it felt like a logical fit for us to move in this direction.
We are able to deploy much more quickly than if we were running physical equipment or rack servers.
How was the initial setup?
The initial setup was fairly straightforward. It was just the basics that we would have expected in using a product that we were already familiar with in OneView.
We did use HPE’s Pointnext services, and our experience was okay.
What about the implementation team?
We used both a reseller and HPE for our deployment.
What was our ROI?
If I look back at the days when we were deploying physical or rack-mount equipment as needed, the product has saved us weeks.
It's a relatively new investment. If anything, it has increased our costs at this point.
Which other solutions did I evaluate?
We have a long-standing relationship with HPE, between the technology, pricing, and so on. It was a good fit.
What other advice do I have?
HPE Synergy is a good platform, but prospective buyers need to look at management and updates to make sure that they know what they are getting into.
HPE continues to make a good product. There is no doubt about that. It is possible that we jumped into this a little too early. It would have been nicer if it had been a more mature product when we adopted it. Sometimes waiting a bit can be beneficial.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
I wanted to post an update.
As technology moves forward, copper and two-strand fiber Ethernet cables should support 10/25 Gbps as the minimum speed, with auto-sensing optics. Since finding auto-sensing optics is proving to be a problem, even manually configuring links as 10 or 25 Gbps would mean designing the blades for 25 Gbps now, with 50 Gbps by 2020, and providing options of 12- or 24-strand OM4 fiber connectors that allow 10, 25, or 50 Gbps on two-fiber links while offering 40, 100, and 200 Gbps uplinks by 2020. I would also add a focus on NVMe over Fabrics to expand storage beyond the blade at speeds faster than normal storage solutions support.
Between 2022 and 2025, the chassis should make power and fabric connections easier, with a fabric that may be Gen-Z based. Gen-Z may require cable plants to be single-mode fiber and may use a different mechanical connector, justified by eight times the speed of the PCIe v3 we use today, and by being a memory-addressable fabric rather than just a block/packet-forwarding solution.
The biggest issue to me in blades is lock-in, as the newest technology and most options ship in rack configurations, not in the OEM (think HPE or Dell) blade form factor. While the OEMs are at risk of being displaced for commodity gear by the ODMs (which supply the OEMs) using components specified by the Open Compute Project (OCP), the impact of CPU flaws could trip up the industry. Some ARM vendor may step in with a secure, low-cost container compute platform in an OCP-compliant form factor, using Gen-Z to make compute and storage fabrics that are software-defined by design.
In 2016, the two-socket server was the most-shipped form factor worldwide, but 60% of them shipped with only one CPU populated. By 2020, the core counts from Intel and AMD should make it a world where 90% of systems shipped are one-socket systems. That high CPU capacity, along with PCIe v5 or Gen-Z, will radically change what we buy at the beginning of the next decade, which makes buying a blade enclosure today that you want to keep for 5-8 years of functional life like testing the law of diminishing returns. While the OEM may provide support and pre-2022 parts, post-2022 you will be frozen in technology time. So while an enclosure fully populated with 2019 gear may provide value, any empty slots will be at risk of becoming lost value.
While I wait for blade enclosures designed for the problems of the next decade rather than the last one, I think buying rack-mount servers is the best solution for this gap between blade value and design limitations, at least for enterprises that buy capacity on a project-by-project funding basis. Since the costs of rack servers are charged directly per project, re-hosting or refactoring to the next great hosting concept in the coming decade will be easier to account for, while minimizing the orphaned lagging systems that tend to move more slowly than the rest of the enterprise.