What is our primary use case?
It is where we do most of our compute for the homegrown software that we developed and use. We also use the product for third-party software, alongside cloud-based services.
In a hybrid cloud environment, the solution enables us to run a lot of databases, along with homegrown, in-house developed applications that we use for media servers and compression servers. We can also do workforce management and workforce optimization for the products that we provide.
How has it helped my organization?
We can get more density in the same physical footprint, which has more to do with the density of the blades that go into the Synergy frames. You get fewer blades than you could with the old c7000s, but there are more cores and sockets with more memory available per blade, so you can pack your applications more densely.
We build out a whole stack at one time, so we don't have to worry about it until that stack is full, which gives us time to get the next one ready.
What is most valuable?
You don't have to have networking in every single frame; you just need the interconnects. You don't have the traditional A and B sides with multiple LAG groups, so you really can sustain a lot of loss. The other side of that is, if you need to push more bandwidth upstream, you can, because of the interconnects in the networking, and the same goes for Fibre Channel as well.
What needs improvement?
The speed of OneView, and how long it takes to update the entire configuration, needs improvement. It could also be a little clearer about the impact that different actions will have. They do give warnings for certain things, but for others there is no warning; you perform the action and it ends up rebooting something like the host. If that happens in a production environment, it is really dangerous. This is our pain point.
For how long have I used the solution?
We have had it for maybe a year and a half to two years.
What do I think about the stability of the solution?
We haven't really had any problems once it was set up. The initial installation can sometimes be problematic.
We have had some weird issues with the networking and interfaces. We had an interface that wouldn't come up if it was the first one to join a LAG group, but if it joined second, third, or fourth, it worked fine. We still haven't figured that one out.
Updating the entire configuration takes quite a long time because it has to go and update so much. The potential for downtime when you do that is also problematic, especially if you are not working with a full three- or five-frame set. If you are going from one frame to two frames, or from two frames to three, there is a potential for downtime. So, we have opted to deploy full stacks when we implement them.
What do I think about the scalability of the solution?
It is scalable. With OneView, you can manage multiple frame sets. We have chosen not to do that right now, but I can see that, as we get bigger, we will want to implement it and maybe change the frame linking a bit so we can do that. However, we haven't done that yet.
How are customer service and technical support?
The technical support was pretty good. They were good to very good, depending on the issue.
Which solution did I use previously and why did I switch?
We had the c7000, and there wasn't anything new coming for it. We needed to move forward to a platform that we could rely on for the next ten or so years, something that we could deploy while taking advantage of all the functions that it has.
How was the initial setup?
The initial setup was definitely different from what we were used to, so there was a learning curve. However, the more experience we gain with it, the easier it becomes. Every implementation has been faster and easier than the previous one. We are at the point now where it is pretty straightforward for us.
What about the implementation team?
We used startup services for the deployment. The frustration with that was that the work was contracted out to third-party vendors, and what you get from third-party vendors in terms of knowledge is hit or miss.
We will probably always buy the startup services. However, we will do the rack and stack ourselves, along with most of the wiring for the network and Fibre Channel. Then, we will let the startup services handle everything from the interconnects through the actual configuration of the enclosure itself.
Which other solutions did I evaluate?
We did look at Cisco UCS only because we thought it might be a good time to change things up, but we are really an HPE shop.
What other advice do I have?
Make sure that it will work for you, your environment, what you have in mind, and what you want to accomplish. If you have a lot of small points of presence located around the world, this may not be the best solution. However, if you are in a big data center or a colocation data center and you will be doing a lot of deployments, then I think this is a good solution.
Right now, we are mostly configuring the profiles, the frame sets, and the logical enclosure groups manually. We are moving towards having Synergy help us manage our IT landscape. That is what we are trying to get to next.
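To give a sense of what moving away from manual configuration could look like, here is a minimal sketch that reads the server profiles and enclosure groups back out of OneView over its REST API, so they can at least be audited programmatically. The appliance address, credentials, and API version are placeholders, and the endpoint names are from memory of OneView's documented REST API, so verify them against your release before relying on this.

```python
# Minimal sketch: list the server profiles and enclosure groups we currently
# maintain by hand in OneView, as a first step toward managing them as code.
# Hostname, credentials, and API version are placeholders (assumptions).
import requests

ONEVIEW = "https://oneview.example.local"   # hypothetical appliance address
API_VERSION = "2000"                        # must match your OneView appliance

def login(user: str, password: str) -> str:
    """Authenticate against the appliance and return a session token."""
    resp = requests.post(
        f"{ONEVIEW}/rest/login-sessions",
        json={"userName": user, "password": password},
        headers={"X-Api-Version": API_VERSION},
        verify=False,  # self-signed appliance cert in a lab; use a CA bundle in production
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def list_members(token: str, uri: str) -> list:
    """Return the members of a OneView collection resource."""
    resp = requests.get(
        f"{ONEVIEW}{uri}",
        headers={"Auth": token, "X-Api-Version": API_VERSION},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("members", [])

if __name__ == "__main__":
    token = login("administrator", "changeme")  # placeholder credentials
    for profile in list_members(token, "/rest/server-profiles"):
        print("profile:", profile["name"])
    for group in list_members(token, "/rest/enclosure-groups"):
        print("enclosure group:", group["name"])
```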
We are not using it as a fully composable infrastructure because we have storage outside of Synergy. It is sort of a hybrid of what we were doing before and what composable infrastructure really is, so that is where we are at.
It hasn't decreased our deployment time yet, but it potentially can in the future. We want to apply this not only to the servers that we deploy, but also to the infrastructure that deploys those servers, so that all of it is configured and deployed using infrastructure as code. We are a long way from that, but that is where we want to get, and hopefully we will get there.
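As a rough illustration of that infrastructure-as-code direction (not how we do it today), the sketch below creates a server profile from an existing server profile template through the OneView REST API. The template name, hardware URI, credentials, and API version are hypothetical, and the endpoints reflect OneView's documented workflow as best I recall it, so check them against your appliance's API reference before using anything like this.

```python
# Sketch: instantiate a server profile from a template instead of clicking
# through OneView. All names and credentials below are illustrative.
import requests

ONEVIEW = "https://oneview.example.local"   # hypothetical appliance address
API_VERSION = "2000"                        # must match your OneView appliance
HEADERS = {"X-Api-Version": API_VERSION}

session = requests.Session()
session.verify = False  # lab appliance with a self-signed cert; prefer a CA bundle

# Authenticate and attach the session token to subsequent calls.
auth = session.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "changeme"},  # placeholders
    headers=HEADERS,
)
auth.raise_for_status()
HEADERS["Auth"] = auth.json()["sessionID"]

# Find the profile template that was defined once for the whole stack.
templates = session.get(
    f"{ONEVIEW}/rest/server-profile-templates", headers=HEADERS
).json()["members"]
template = next(t for t in templates if t["name"] == "media-server-template")  # hypothetical name

# Ask OneView to render a new profile body from the template, point it at an
# empty compute module, and submit it.
profile = session.get(f"{ONEVIEW}{template['uri']}/new-profile", headers=HEADERS).json()
profile["name"] = "media-server-01"                      # hypothetical profile name
profile["serverHardwareUri"] = "/rest/server-hardware/<bay-uri>"  # placeholder bay URI

created = session.post(f"{ONEVIEW}/rest/server-profiles", json=profile, headers=HEADERS)
created.raise_for_status()
print("profile creation accepted, task:", created.headers.get("Location"))
```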
It is the next generation of what is possible compared to the old stuff, where you were very much confined to one frame. With multiple frames, you can make it composable and move workloads around more easily.
We don't really have Synergy for our development environment.
Biggest lesson learnt: Pay attention to its nuances. Take advantage of everything that is built into the system. A lot of times, we buy technology and only use one part of it. If you use the whole suite, it works better.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
I wanted to post an update.
As technology moves forward, copper and two-strand fiber Ethernet cables should support 10/25 Gbps as the minimum speed, with auto-sensing. Finding auto-sensing optics is proving to be a problem, but even with links manually configured at 10 or 25 Gbps, that would mean designing blades for 25 Gbps now and 50 Gbps by 2020, and providing options for 12- or 24-strand OM4 fiber connectors that allow 10, 25, or 50 Gbps over two-strand links while offering 40, 100, and 250 Gbps uplinks by 2020. There should also be a focus on NVMe over Fabrics to expand storage beyond the blade at faster speeds than normal storage solutions support.
Between 2022 and 2025, chassis should make power and fabric connections easier, with the fabric possibly being GenZ based. GenZ may require cable plants to be single-mode and may use a different mechanical connector, justified by roughly eight times the speed of the PCIe v3 we use today and by being a memory-addressable fabric rather than just a block/packet-forwarding solution.
The biggest issue to me with blades is lock-in, as the newest technology and most options ship in rack configurations, not in the OEM (think HPE or Dell) blade form factor. While the OEMs are at risk of being displaced by commodity gear from the ODMs (which supply the OEMs) using components specified by the Open Compute Project (OCP), the impact of CPU flaws could trip up the industry. Some ARM vendor may step in with a secure, low-cost container compute platform in an OCP-compliant form factor, using GenZ to build compute and storage fabrics that are software defined by design.
In 2016, the two-socket server was the most shipped worldwide, but 60% of them shipped with only one CPU per socket pair populated. By 2020, the core counts from Intel and AMD should make it a world where 90% of systems shipped are one-socket systems. The high CPU capacity and PCIe v5 or GenZ will change what we buy at the beginning of the next decade even more radically, which makes buying a blade enclosure today and expecting 5-8 years of functional life from it like testing the law of diminishing returns. While the OEM may provide support and pre-2022 parts, post-2022 you will be frozen in technology time. So, while enclosures fully populated with 2019 gear may provide value, any empty slots will be at risk of becoming lost value.
While I wait for better blade enclosures designed for the problems of the next decade rather than the last one, I think buying rack-mount servers is the best solution for enterprises that fund capacity on a project-by-project basis, given this gap between blade value and design limitations. Because the cost of rack servers is attributed directly to each project, re-hosting or refactoring to the next great hosting concept in the next decade will be easier to account for, while minimizing the orphaned, lagging systems that tend to move more slowly than the rest of the enterprise.