Primary use case is for automatic deployment of VMware guests.
It's performing as we want. We're not really asking anything too complex of it, but it does what we ask of it.
Our organization started to move a lot more towards automating all the things that we can. We're catching up to that, but we're definitely heading in that direction. It's one of those things that enables us to tie in with our other pieces, with automating the operating system, etc. VMware is then able to automate the build of our virtual machines.
In terms of infrastructure agility, we're still getting our feet under us in some areas, but it's definitely playing its part and doing what it does well.
Our speed of provisioning has also improved. We used to build systems manually, which would take four hours or a day. Nowadays we're able to spin something up off a template that we update every so often and it takes about 20 minutes. We can take an existing template, build it back up, add some configuration for it, specific applications, turning things into what the developers need, and then we can have them deploy it off that. It makes it so that we can have customization within a framework.
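As a minimal illustration of this template-based flow, here is a hedged sketch using pyVmomi, VMware's Python SDK. The vCenter host, credentials, template, and cluster names are hypothetical placeholders, and any further guest customization would hang off the clone spec:

```python
# Hedged sketch: clone a VM from a template with pyVmomi; all names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; verify certificates in production
si = SmartConnect(host='vcenter.example.com', user='automation',
                  pwd='secret', sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Walk the vCenter inventory for the first object with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

template = find_by_name(vim.VirtualMachine, 'rhel8-template')
cluster = find_by_name(vim.ClusterComputeResource, 'prod-cluster')

# Place the clone in the cluster's resource pool and power it on.
relospec = vim.vm.RelocateSpec(pool=cluster.resourcePool)
clonespec = vim.vm.CloneSpec(location=relospec, powerOn=True)
task = template.Clone(folder=template.parent, name='app-server-01', spec=clonespec)

Disconnect(si)
```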
The most valuable feature is the integration with some of our other automation platforms. We're getting started with Jenkins, and it has plug-ins for automating operating systems and other things. So it works together with our infrastructure. We don't have a very complex environment, we don't have NSX yet or anything really crazy, but it has been able to interoperate with all the things we do have.
It is intuitive and user-friendly. It took some getting used to; we had to figure it out in the beginning, but that was a couple of versions ago. We like the improvements that have been made over time, so it has definitely been able to progress with the environment.
We don't have too complex an environment; we're not doing machine learning or using the advanced features all that much. We're a pretty straightforward IT shop. We just provide servers and, from there, it's what the customer wants. The next step we would probably like to see is a customer portal, so that instead of our having to punch the button, the customer could. But I believe that VMware offers enough that setting that up is more on us, rather than waiting for them to offer it.
We need to learn more and advance our usage of the product. We're doing what we can with what we have, but we have to learn a bit more. Better training, or training modules, wouldn't hurt. I haven't personally looked through what the portal has, but more training is always good, so we could take a new employee, point him to the training, and get him up to speed quickly. I have 10 years or so of experience with VMware, but I'm the old guy in the department. Everybody else is newer than me on this, and not everyone has my experience. So the training would be nice.
I've been impressed with the stability so far. It does what we ask it to. That's always nice. You don't have to think about it. We haven't had any downtime.
We can scale it up or down. We haven't needed to yet, but we can.
We haven't had to use technical support. I do a lot of blog reading, so I look up my answers on my own. But tech support, on other issues, has been where we need it to be.
We weren't using much previously. This was right at the beginning of when we were starting to automate things. We saw the VMware automation and decided that, since we had VMware, it would be the logical choice. Then we started with Jenkins for a lot of our other operating system features. Jenkins, of course, has plugins that talk to VMware natively, so it was a natural fit.
When selecting a vendor, the biggest thing for us is multi-operating system support. There is the classic divide: I'm on the Windows side, and we have a Linux department also. When looking at different tools, something might be better for Linux, but we have to have something that will work for both of us. We don't want two different tools for two operating systems. The Linux team wanted to use Puppet instead of Chef, but Chef supports both Windows and Linux better. The nice thing about VMware, aside from it being a lot more OS-agnostic, is that both teams can use the product. One product for both operating systems. That was one of the primary things. We could have a tool that runs great, but it might be a situation where, "Oh yeah, your Windows support is lame." The big thing for us is the interoperability between operating systems.
I thought the initial setup was straightforward. The biggest thing, once we had it set up, was to integrate it with the vCenter, but that was pretty straightforward. That was part of the workflow. It is automated within the product as part of the initial deployment, which is really handy.
The upgrade experience was also quite easy.
Better pricing is always handy, but I feel it's at the right price point.
There were not too many on our list. VMware was the natural fit. We saw the automation and we liked it. Chef, technically, will do automation, and it has connections into VMware, but we preferred having the VMware automation handle it. Chef will do it, but it doesn't have as many things; we would have had to write a lot more tools for it. Instead of making Chef the one tool to rule them all and doing everything with it, we branched out to VMware automation to handle its own subset.
Jenkins is a Swiss Army knife. It will do literally everything. The problem is that you have to tell it to do everything; you have to build all of the features into it that you want. There's a language to do it, but it just says, "Here's the entire toolbox, do whatever you want." It doesn't have as many pre-packaged things. VMware has the ability to build things, but it also has a lot of things prebuilt, which is very handy. If I just need the basics, like standing up some VMs, it already has those workflows built in. They can both expand to what we need, but VMware had some pre-provided things that were very handy for getting off the ground quickly.
vRA has a very nice toolset for integrating with VMware. It is great for automating things within the VMware environment. We probably need to learn more about it, so we can fully realize its use and learn what the plugins for other things are. But it's doing everything that we need for now, and we've seen that it has room to grow with us.
We use it for the deployment of new environments and multiple stacks, as well as deployment inside of NSX. It is also used for easy application deployment and container management.
We can do scripting and customization after deployment. With vRA, we can integrate everything with a single click. There is also tracking and change management control.
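For illustration only, here is a minimal sketch, in Python, of what a single-click catalog request can look like against the vRA 7 REST API. The host, tenant, credentials, and catalog item name are hypothetical, and endpoint paths can differ between versions:

```python
# Hedged sketch of a vRA 7 catalog request; all names are placeholders.
import requests

VRA = 'https://vra.example.com'

# 1. Exchange credentials for a bearer token.
token = requests.post(f'{VRA}/identity/api/tokens', verify=False, json={
    'username': 'svc-automation@corp.local',
    'password': 'secret',
    'tenant': 'vsphere.local',
}).json()['id']
headers = {'Authorization': f'Bearer {token}', 'Accept': 'application/json'}

# 2. Locate the entitled catalog item to request.
items = requests.get(f'{VRA}/catalog-service/api/consumer/entitledCatalogItems',
                     headers=headers, verify=False).json()['content']
item_id = next(i['catalogItem']['id'] for i in items
               if i['catalogItem']['name'] == 'CentOS-Small')

# 3. Pull the request template, adjust it, and submit it.
url = f'{VRA}/catalog-service/api/consumer/entitledCatalogItems/{item_id}/requests'
template = requests.get(f'{url}/template', headers=headers, verify=False).json()
template['description'] = 'Requested via automation'
requests.post(url, json=template, headers=headers, verify=False).raise_for_status()
```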
Repetitive tasks, like provisioning storage, network, and compute, which used to take two to three weeks, now take five minutes.
I like the automation that it provides to deploy VMs and multiple apps. I also like the integration with NSX and AWS for endpoints, which allows us to manage workloads, including the comparisons it does between different VMs. It can do this in AWS or Azure.
It simplifies deployment for any new VM admin. Instead of only deploying templates, we can deploy blueprints, which are easier for an organization's day-to-day operations.
VMware should go the way of vROps, with everything in one machine, the ability to scale out, and a more distributed environment instead of having the usual centralized SQL database.
Three-tier environments are not scalable.
They need to get away from Windows.
It depends, because you are still dependent on the Windows machine that does all the requests and pulls from other agents. It can scale out if you size it right the first time.
We used technical support with previous versions.
We knew we needed a new solution when we were falling behind and could not deploy what the business units needed.
The product has come a long way. Now, it is more streamlined and GUI-based.
I have done parallel upgrades, then migrated the settings over.
We also evaluated CA.
We chose VMware because we are a VM shop and the product allows multiple endpoints. We could also have endpoints for AWS.
While it's user-friendly, you need to know what you are doing with it.
Get your requirements beforehand. Make sure of the services that you want to provide and have them nailed down. If you are just spinning up VMs, then you don't need vRA. If you are providing services, you're going to become a broker of services to people, so you have to plan ahead. Also plan the workloads that you're going to be providing, because they will consume a lot.
The ability to provision to on-prem and public cloud using a standardized set of blueprints.
It has reduced provisioning time from roughly three to six weeks to about an hour on a private cloud, and about 25 minutes on public cloud.
The ability to provision native cloud services, as well as the ability to provision Azure VMs in the same way we provision AWS VMs. Right now, it's a broken process; Azure is kind of a workaround. It would be good to have native Azure support, with PaaS service offerings from Azure and AWS offered natively through vRA.
On a scale of one to ten, stability is a seven.
There are a lot of moving parts, and we often have difficulty with an individual service on one of the components failing and bringing down the entire stack; that's pretty regular. We've been using it since version 6 and that's been pretty consistent. As the components have been compressed it's gotten better, but for each of the Windows servers and components that we have, there are regular service failures.
Scalability is excellent.
We use BCS and that makes a difference. Typically, it depends on what time of day we're calling and what region we're in. Usually out of Cork, Ireland it's pretty good and out of the U.S. it's good. But when it gets sent overseas we do have some issues.
Other than that, support also has a problem with complexity. For a vanilla build of vRealize Automation, they generally know how to support it very well, but because we have a lot of customizations - we have a lot of custom software components and integrations - by the time we're able to get the support call up to speed on what's going on, we've generally figured it out on our own. That's not to say it's anyone's fault, it's just that we have a lot of customizations in there.
When we call, we don't always get the same person. Sometimes it requires an escalation before we eventually find someone who's good. It's something like every third time that we get someone who is good from the beginning; the other two out of three times, we have to work through an escalation process.
We were using vCenter Orchestrator just by itself but it was only used by our internal teams to build for other users. vRA has enabled us to give self-service to all the end users.
In terms of switching, honestly, a VMware sales team came by. We were getting complaints from a lot of our end users on provisioning time, and we would generally get people that were requesting more than they needed because of the time constraints. So we wanted to simplify the process and make it a self-service portal and that was the reason to switch.
It was the best solution at the time we started the project, which was about two and a half years ago. It may not be now, but we are pretty heavily invested in the stack, so we don't want to throw all that money away, switch platforms, and start from scratch again.
The most important criteria when picking a vendor is their ability to solve a problem that we have; and then second would be cost.
DynTek. We used Presidio as well as ServiceNow.
Really look at the competition; it has come a long way. Cisco has a product, ServiceNow has a product, and even Red Hat has a competing product. Depending on your workload type and your endpoint type, there are potentially better solutions. But if you are a fully integrated VMware environment, this is still the best option.
Regarding implementation, you should have a very well documented process for your current provisioning. You should have documented all the types of workloads and blueprints you would potentially need based on user demand, not based on what the admins think. We made that mistake: we offered what we thought the user would want, and most of the blueprints we created went unused. We went the opposite way in the newer release and polled our entire community, and they gave very specific responses. So, focus on what the users tell you they want; otherwise they're not going to use the product.
Preparation of Hybris Commerce HY300 training laboratory environments and Hybris Expert Services demo infrastructure went from days of effort down to hours. Reliability and consistency are no longer concerns.
Code maturity is reaching a point where refactoring some internals will be important to maintain the rate of improvement. The software has evolved at a breakneck pace, and there is a lot of legacy code which needs refactoring and cleanup.
This doesn’t affect the operation of the software as much as it affects the learning curve for the open source community. If the code gets messier and messier, then community involvement will taper off.
Major architectural features, like the transport system for example, have subsequently been refactored. When I wrote the review, SaltStack had decided to replace ZeroMQ for extremely large-scale operations, and embarked on a novel approach, RAET. By early estimation this appeared over-engineered and under-tested, and it lost momentum. Without missing a beat, SaltStack rolled out an asynchronous TCP transport option that was both simpler and more scalable. This was received well by large operations depending on SaltStack. It is a major refactoring win, and a testament to the maturation of the software.
Contributing to SaltStack could be difficult while their internal development processes matured. One symptom, observable to community contributors not long before I wrote my original review, was git history rewriting. I'm not going to go down the rabbit hole of why this is bad, but I will say that this hasn't, to my knowledge, happened since. I once worried this difficulty would be a barrier to progress at SaltStack, but I am no longer worried.
In particular, I was working with salt-cloud when I authored that review. Since then I have seen considerable attention paid to refactoring code I thought was problematic. They have a mature API deprecation process, which is not 100% executed (things get deprecation warnings, but the deprecated code can remain longer than declared). Even that has been improved, and in the meantime a lot of new functionality has appeared without affecting the quality of existing code.
Conventions around using salt, like formulas, testing methodology, and new functionality like the Salt Package Manager have added to the maturity of SaltStack. These conventions enable commercial and open source contributions to the SaltStack DevOps ecosystem, increasing the rate that SaltStack accretes capabilities without adding stresses to the core development at SaltStack.
We have used this solution for a year.
I did not encounter any issues with stability.
I did not encounter any issues with scalability.
Technical support is excellent.
I have used Chef. Chef is harder to teach, so it is more difficult to build an internal community around the toolset.
There are multiple ways to do the initial setup. The documentation is clear, but could be better organized.
It’s free until you need support. It will deliver a lot of value prior to production exposure, but you should plan to get an enterprise SaltStack license by the time your DevOps iterations can deliver reliably to QA.
We evaluated Chef, Puppet, and Ansible.
Make sure you have cross-functional collaboration between your development teams and operations teams.
Develop configuration as code in parallel with code development.
Use SaltStack to deploy and control both development sandbox environments and also full scale test and production environments.
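As a hedged sketch of that last point, assuming minion IDs are prefixed by tier (a hypothetical naming scheme) and a state tree named 'myapp', Salt's Python client can drive the same states through every environment:

```python
# Minimal sketch: promote one state tree across tiers via Salt's Python API.
# Must run on the salt-master host with sufficient privileges.
import salt.client

local = salt.client.LocalClient()

for tier in ('sandbox-*', 'test-*', 'prod-*'):
    # state.apply runs the 'myapp' state tree (hypothetical) on matching minions.
    results = local.cmd(tier, 'state.apply', ['myapp'])
    for minion, states in results.items():
        # On success each value is a dict of individual state results;
        # on a rendering error it is a list of error strings.
        ok = isinstance(states, dict) and all(
            s.get('result') for s in states.values())
        print(f"{minion}: {'ok' if ok else 'FAILED'}")
```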
Cross-platform Windows and Linux support: We run a Windows infrastructure within AWS with several key services deployed on Linux instances.
We have been able to integrate with AWS to deploy continuous delivery services with an extremely quick turnaround time. Salt lets us manage those instances, and control the deployment seamlessly.
Windows support, and support in general: Getting responses to problems can take weeks or months in my experience. Windows is advertised as a first-rate supported platform; however, it is rife with issues that have added countless hours to our roadmap. Documentation is also severely lacking for much of the Windows platform support, and in many cases I have had to resort to third-party blogs and tutorials to resolve problems.
I have used it for nine months.
I have encountered stability issues with Windows support in AWS/EC2.
I have not encountered any scalability issues so far.
I rate technical support as 3/10. The only support we get is through the mailing list or through GitHub. They have offered a higher level of support for $20k, but we haven’t seen anything to indicate the value in doing that when the platform as a whole has issues that should have been tested before being deployed.
Initial setup should have been straightforward; however, documentation issues and bugs in general caused this to take a very long time.
The software is open source and free; however, things that should be tested for stability (like Windows support) are not fully vetted, and it’s unclear if a paid support offering would actually resolve those problems.
Don’t rely on the SaltStack documentation alone; use Google and other resources to find help, if you are not going for paid support. Windows support is lacking but you can overcome the issues with a bit of ingenuity.
Our use case is infrastructure automation, like self-service.
We utilize all the blades that we have available for compute, mostly going into VMware vCenter.
When I have been using it, it has been mostly for private compute.
We provided the ability to request virtual machines to our end users. Before, this was a very manual process, which took engineers to do. Now, it's an automated process.
vRA has enabled us to leverage existing VMware processes, systems, and training in our organization to support IT ops.
The most valuable thing is that it's flexible. You can do anything with coding.
VMware needs to make it to where it is not as custom. Right now, you spend a lot of time making the services work. In order to get it up and running initially, that takes time. I would like it if they didn't require custom code and we could get it running out-of-the-box.
I have been using vRA for about five years.
Stability is pretty important. If the platform goes down, people can't provision anymore, because they are relying on the automation rather than the old manual processes.
Our developers and IT consumers use it as well as other infrastructure teams.
vRA is the means for 90 percent of our infrastructure requests. There are some use cases, like big data or bare-metal, where we don't necessarily provision through it.
The service of VMware during our deployment was average; I wouldn't say VMware support is exceptional.
Post-deployment, it takes time to get to the right people in order to get proper support.
We did use a previous product, but integrating it with VMware was very custom.
There is complexity to the setup. You have to custom write code for any integrations. It took six months to make it end user ready.
There were about 10 of us involved in the setup. We just have a cloud team.
We have seen ROI from replacing manual processes with automation.
vRA has helped to automate deployment for developers. The solution increases developers’ responsibilities and productivity because now they can provision their own VMs and focus on the code.
The solution’s automated processes have reduced infrastructure provisioning time. Automation takes the time down to about an hour, whereas it could take days if done manually. The same reduction applies to application provisioning time.
The solution has reduced time to market for our apps. It takes the burden off of our internal processes, which can now provision VMs in an automatic fashion.
It is pricey for what you get. Nutanix is cheaper.
We did not evaluate other options.
Make sure you give yourself enough time to implement or replace all your use cases as a business.
The solution requires specific expertise with it to be able to use it effectively.
I would rate this product as a seven (out of 10).
We use it for server deployments, typically. It's mostly for managing our own private cloud, for infrastructure-as-a-service deployments. It has performed well. We just recently went through an upgrade that had some hiccups to it, but it's been performing well for us.
It allows us to deploy servers on a much faster basis. Instead of deploying a VM from a template and going through the process of configuring that VM, with vRA we're able to click once and it does everything: grabs an IP, joins it to the domain, loads whatever configuration agents are needed. It does all of that without manual intervention.
It has definitely improved the speed of provisioning over the old-school way of deploying a VM from a template.
Ease of use, the GUI, is probably the best feature; really anybody can use it. You don't have to be technical to be able to deploy a VM.
I find it to be intuitive and user-friendly. Regarding some of the files that you feed it, you don't have to do a ton of development. You can feed it pretty standard configuration files. You don't have to be a developer, you don't have to know C# or Java or the like to get it going.
An improvement - and maybe this is already a feature that I don't know about - would be the ability to deploy to the public cloud, to AWS or Azure, etc., if it's not already there.
My impression of its stability is "middle of the road." We've had some issues where it seems to be a little bit sensitive, where deployments fail and we don't really know a specific reason why. We'll dig through logs and try to figure out what's going on, but it's not always apparent why it failed. And you can kick it off again and it'll succeed. So stability could be better.
The scalability is okay. You can't, to my knowledge - and I could be wrong - tell it to deploy like this: "I want 20 VMs all configured this way," and have it go ahead and spin them off. You have to do them one at a time. So, from a scalability standpoint that's not great, but it could also be that we're just not using it correctly. We don't actually have the need to do that very often, but from time to time we'll get a request such as, "We need five SQL Server VMs." It would be nice to be able to do it once and be done with it, rather than repeat that process five times.
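One possible workaround, sketched below with hypothetical token and catalog-item values, is to drive repeated requests through the vRA 7 catalog REST API rather than the UI. This assumes the API flow documented for vRA 7 and is not a confirmed batch feature:

```python
# Hedged workaround sketch: submit the same vRA 7 catalog request N times.
# Assumes a bearer token and catalog item ID were already obtained from the
# /identity/api/tokens and /catalog-service/api/consumer/entitledCatalogItems
# endpoints; all values here are hypothetical.
import requests

VRA = 'https://vra.example.com'
HEADERS = {'Authorization': 'Bearer <token>', 'Accept': 'application/json'}
ITEM_ID = '<catalog-item-uuid>'

url = (f'{VRA}/catalog-service/api/consumer/'
       f'entitledCatalogItems/{ITEM_ID}/requests')

for n in range(5):  # e.g., "we need five SQL Server VMs"
    template = requests.get(f'{url}/template', headers=HEADERS).json()
    template['description'] = f'SQL Server batch VM {n + 1}'
    requests.post(url, json=template, headers=HEADERS).raise_for_status()
```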
To my knowledge, I don't think there was a previous solution.
I wasn't involved in the initial setup, but we just went through an upgrade. It was not without its challenges. Some of the challenges were probably on our side, in being able to support the newer infrastructure. But I seem to recall there being some issues importing some of the old settings from vRA 6 into vRA 7, so that VMs that were built in 6 could be destroyed from within the 7 UI. There were some challenges in getting that done. It's done, but I believe there were some speed bumps along the way.
I rate vRA at seven out of ten. There's some room for improvement, but it's better than the old way that we used to do things. It's a good product, it could just use some ironing out.
The most important criterion when selecting a vendor, to my mind, is support: a support network, whether it be knowledgebase articles online, forums online, or calling into actual, paid support.
These features serve as the most critical pieces for automating anything, not just state, but also execution and remediation.
I don’t want to build automation that just does a thing or two. I want to build automation that is intelligent, part of the fabric of our environment, and is somewhat self-sustaining. I think SaltStack can help me do this.
SaltStack provides the capability necessary to truly streamline our SDLC and environment management. From a high level, it allows coders to code, testers to test (automated testing too), and admins to admin in the most inter-connected and effective way possible.
We have been using this for three years.
There are some issues here and there, such as nuances with Windows and minions 'falling asleep', but it's manageable.
I did not encounter any issues with scalability.
I would give technical support a rating of 8/10.
I was using more of a Frankenstein automation solution previously, and the reasons for switching were the capability of SaltStack, its performance, and the ramp-up time (ease of use).
The setup was pretty straightforward. It took some time getting familiar with all the configuration options and playing around with pillars and grains. On the whole, it was relatively easy to get going.
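For anyone in the same familiarization phase, here is a small hedged sketch of poking at grains and pillars through Salt's Python API, mirroring the equivalent CLI calls (e.g., salt '*' grains.item os); the 'app:version' pillar key is hypothetical:

```python
# Small exploration sketch via Salt's Python API (run on the master).
import salt.client

local = salt.client.LocalClient()

# Grains: per-minion facts gathered at startup (OS, hardware, custom roles).
print(local.cmd('*', 'grains.item', ['os', 'osrelease']))

# Pillar: data assigned to minions from the master; the key is hypothetical.
print(local.cmd('*', 'pillar.get', ['app:version']))

# Target by grain value instead of minion ID ('tgt_type' on recent releases,
# 'expr_form' on older ones).
print(local.cmd('os:Windows', 'test.ping', tgt_type='grain'))
```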
I think they are going to have a tough time with the Enterprise licensing. So much can be done with the Open Source side, and especially for smaller shops. I personally think the pricing for Enterprise is hard to justify.
We looked at Chef, Ansible, and Puppet.
Do it and take full advantage of its capability. Be creative and automate everything you can with it.