What is most valuable?
For CA Service Virtualization, it’s the ability to quickly prototype something. My guys really like being able to do a recording session. It's a way for them to initialize from existing services when they need to get something up and running. The ability to have a listener and capture sample packets was a key thing that they really liked. It really helped us jump-start something. We could do something within an hour and have it up and initially running.
Also, the ability to have the different back end connectivities, whether it's an Excel spreadsheet, or more complex things where we’re now linking the data sets and responses back together. Those have been a couple of the key areas for us and have been very beneficial. It’s also certainly a lot better than just doing stub code, because now I have templates that I can more readily reuse. That's better than just somebody who’s kind of building up the Java code to build stubs.
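To illustrate the data-driven idea, here is a minimal sketch in plain Python (not CA's actual API; the CSV layout and function names are assumptions): a data-driven virtual service is essentially a lookup table of canned responses loaded from a spreadsheet export.

```python
import csv
import io

# Hypothetical example data: each row maps a request key to a canned response.
SAMPLE = """\
key,response
getAccount:123,"{""id"":123,""status"":""active""}"
getAccount:456,"{""id"":456,""status"":""closed""}"
"""

def load_responses(csv_text: str) -> dict[str, str]:
    """Build a lookup table of request key -> canned response."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["key"]: row["response"] for row in reader}

def handle(table: dict[str, str], operation: str, arg: str) -> str:
    """Return the canned response matched to an operation/argument pair."""
    return table.get(f"{operation}:{arg}", '{"error":"no match"}')

table = load_responses(SAMPLE)
```

Swapping the spreadsheet swaps the behavior of the "service" without touching any code, which is the reuse advantage over hand-built Java stubs.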
We keep running into situations where people will start building stubs and things like that, and we'll come back and show them the benefits of this solution. Once they start to see it in use, then they start to say “oh, okay, this is a lot better on that side.”
From a CA Release Automation perspective, it’s certainly the idea of being able to do the automated deployment. The challenge is that we started down this path a few years ago, we purchased some licenses, and we have not taken full advantage of them to this point. We've had an ongoing challenge within our environment getting it stood up quickly.
Now we're getting a little bit more focused about this. I'm already doing some work on it and we’ll be doing more over the next few weeks. We’ll be looking at the Release Automation tools for coming up with the best processes for us to be able to have a repeatable process that quickly can deploy code without having to do a lot of manual steps. We want a good and clean workflow.
I think one of the things that we did appreciate was that changes were coming in to the product at a good cadence. We needed to support WebLogic, which was a big one for us at the time. Those things did come in, and we didn't have to wait a huge amount of time. I always felt like the product has been getting good updates to support us as we were doing some of those activities.
One of the things that we know and are trying to work through is that when I go to set an environment up, maybe I don't quite have that new service yet, but I have other applications ready to go, my UI application or whatever, and I have this other middleware call that needs to be available. The idea that I can spin up or point myself to a virtualized service for that one piece, but still use the rest of the end to end system, that's one of those things that we would envision.
I'm doing a deployment of an environment/application, it's being configured, and then, if I need to, I'm going to use virtualized services for some or all of it. What we've been working on is how we can do a lot of that shift to the left by using service virtualization, so when we deploy we can at least get the development teams' testers up and running on the application. Then we just have the back end virtualized. That might be how we would set up and configure ourselves. It's definitely for that situation where you don't have a true end to end environment, but still need to be up and testing. That's where Service Virtualization would couple with the deployment in my book.
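The deploy-with-virtual-back-ends idea can be sketched as per-environment endpoint wiring. This is a hypothetical illustration (the service names, hosts, and the "virt-host" convention are assumptions, not a real Release Automation configuration): the deployment points some back ends at virtual services while leaving others live.

```python
# Per-environment endpoint map: dev runs fully virtualized, QA mixes a
# live billing system with a still-virtual third-party credit check.
ENDPOINTS = {
    "dev": {
        "billing": "http://virt-host:9001/billing",   # virtualized back end
        "credit":  "http://virt-host:9002/credit",    # virtualized back end
    },
    "qa": {
        "billing": "http://billing.qa.internal/api",  # live system available
        "credit":  "http://virt-host:9002/credit",    # third party stays virtual
    },
}

def resolve(env: str, service: str) -> str:
    """Look up the endpoint the deployed app should use in this environment."""
    return ENDPOINTS[env][service]

def is_virtualized(env: str, service: str) -> bool:
    """Convention for this sketch: virtual services live on virt-host."""
    return "virt-host" in resolve(env, service)
```

The point is that the choice of virtual versus live is data the deployment carries per environment, not something baked into the application.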
CA Test Data Manager comes in where there's one kind of end to end system and there has to be a set of data behind it. A lot of the time these are the systems the calls are hitting, so having that model means we can get the data in place so that when we deploy the software, you can bring it up. It may be our billing data, which is an interesting challenge for us because it's a third-party product, though they work very closely with us and have people on site.
Other systems might need a configuration: all the types of users that are allowed into the system, or other types of price catalog information, whatever we want to model. When you're standing up an environment, that's where TDM comes in; that's where you need to have something in place.
For some of our systems where we have to have data in it and available to product catalogs or something like that, the TDM data can be very beneficial, and we can swap the data around. We might be trying a new product catalog that's coming out, or a few new features that are going to be offered, and we can also then go back to a production-like configuration as well.
How has it helped my organization?
When I originally bought Service Virtualization there were several things that we leveraged it for. One of them was that we had a merger and acquisition, and we needed to interface the two systems together. So we needed to be able to share all the account information and those types of activities that were happening. There was going to be a set of middleware services built on our end to allow the other system to communicate. Before those were ready, they wanted to start testing. We could use Service Virtualization in that case to let them start working with it before the middleware services were built. That was really beneficial. That's actually how I brought the product in in the first place.
The next thing that we really used it for was training. In the past, we tried to have training environments that were done different ways. They actually tried to have one application where they had two people dedicated to just trying to keep a full end to end environment up and functioning. They were never really able to do that. It kind of went by the wayside. It was expensive, they just couldn't keep it in sync, and it kept breaking. You've got enough different systems going that they couldn't fix all the issues. You had to go to the development teams or somebody else to help you figure out where the problem was. That wasn't really tenable.
Another way people tried to do it was to build a simulator. One of them was a Flash-type simulator, but of course that would get out of date, and then they'd have to go back and spend a lot of money to try to update it. What we did instead was build the services we needed to allow the applications to work in a training scenario, so people could run those training scenarios when new sales people were coming in. They could run them with our call center folks, who then had an opportunity to start working with the app ahead of time without having to work with real data, per se.
We've had a situation where folks forgot to scrub the phone numbers. They took them straight from production, which speaks a little bit to the TDM type of situation. What happened was I got an e-mail shortly after we released one of our products. It had come all the way down from the CEO, and people were trying to figure out what happened: a lady had called and said her daughter had had her phone number changed three times in the course of one week.
It turned out that people were actually using the real live system instead of the training environment, but they were following the training documentation: ‘do this, put this phone number in.’ Those phone numbers were real phone numbers in this case, and the same went for the account numbers. People were changing those account numbers on real customers, and that wasn't so good. We quickly went to scrubbed account numbers, but that was one of those side effects where a TDM type of solution can become very important and helpful, to make sure that you're running things through that kind of scrubbing process.
Those are a couple of things: the training, helping to get things going, and then the shift to the left. That's been a real benefit for us. We can get more testing in sooner, and we actually had one project where I pushed us to use it because we couldn't get a stable environment, so I said, let's virtualize those services. In doing that, we were able to let the teams do all their testing. By the time it got to our QA testing, I believe they found only one defect in the application.
The later in the cycle you find defects, the more expensive they are, so finding those things upfront matters, even without an environment that has full end to end capability. That was always my point: you may find some integration issues, but you can get so much of an application's functionality tested with Service Virtualization if you've done a decent job, versus just having to wait for a full environment, especially when you don't have stable environments.
For Release Automation, obviously what we wanted is things that are repeatable so that we can do things faster, without manual configuration, because when you start to drive for a faster cadence, you just have to rely on more and more automation. It has to be a known process: you implement it, and if you fix it once, it's fixed for all of your deployments. The other important thing here is that whatever process you settle on, you stick to it. Whether you're in your Dev environment, your QA, your staging or your production, you want the same process everywhere so that if you find a problem, you can fix it once, and that fixes it for everybody. That way you're effectively always running a prod-type process as well, which gives you much more assurance when you get there.
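The fix-once-everywhere idea can be sketched as a single ordered deployment routine reused verbatim across environments (a hypothetical illustration; the step names are made up, not a Release Automation workflow definition):

```python
# One ordered process, shared by every environment; changing a step here
# changes it for dev, QA, staging, and production alike.
STEPS = ["stop_services", "deploy_artifact", "apply_config",
         "start_services", "smoke_test"]

def deploy(env: str, artifact: str) -> list[str]:
    """Run the same ordered steps everywhere; only the parameters differ."""
    return [f"{step}@{env}:{artifact}" for step in STEPS]

# The identical workflow runs in every environment:
runs = {env: deploy(env, "app-1.2.3.war")
        for env in ["dev", "qa", "staging", "prod"]}
```

Because only the environment parameter varies, a bug found and fixed in the QA run is, by construction, fixed for the production run as well.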
A lot of companies including ourselves have been in a situation where what we do in production is distinctly different from any of the other environments, and that leads to a lot of extra resources being dedicated just to managing the production environments. Release Automation allows us to implement processes that can be reused through all the environments.
Even if you don't have a full DevOps model in place yet, it's critical that your production support people are working with you up front, beginning with the development teams. That's actually how I started promoting the concept of DevOps: we needed to really get in sync with what we wanted to do in production, as that was the end game. We needed people to be much closer together in alignment, so that development was building things appropriate for production and providing insight into the stability of the application, things like that. At the same time, we work on getting closer to a production-like environment all the way through.
What needs improvement?
There were a few things early on that we couldn’t do in Service Virtualization that have since been updated. The concern I have right now with Release Automation is the concept of what we call immutable objects: I build it once in development, and then I move it to all of my other environments.
My challenge is looking at this from a perspective of “I'm kind of looking at … do I look at containers, do I look at what the RA type of product provides?” I also have to consider being cloud ready. As I deploy into an environment, I want to make sure I have enough of a stable and a repeatable environment type of model where once it comes out of development, I know that everything is staying the same.
There are different ways we can slice it, depending on whether you do containers or a combination. Those are some of the things I'm trying to understand more with Release Automation right now. I've been pretty happy with the product. I'm sure there are always things we could ask for, but it's really done a nice job for us.
It's interesting that it still takes teams a little while to wrap their heads around it. They're so used to stubbing. I feel like the product has been able to scale with us; I think more of my challenges are how best to handle the processes. For example, who should be responsible for virtualizing services? I think that generally, the best thing would be for the middleware folks who are building a new service. They should also be responsible for creating the template virtualized service, and you build it right up front so people can start using it. There's a part of it where people could then also modify and update data, so there could be other types of responses for their scenarios, and it doesn't have to be done by the middleware folks. We went with a center of excellence model where they hold the key knowledge, and then we start trying to educate and train other teams to build out more. Initially, we were not at that spot with the middleware teams for a variety of reasons, so we had to rely on the individual teams to build the services.
There are things to be careful about. Let's say I have a customer lookup call; I really don't want to end up in a situation where two different UI teams who need the same customer lookup call each did their own virtualization of it. Those are some challenges we've got to figure out. That's more procedural than it is a limitation of the tool.
What was my experience with deployment of the solution?
We have had no issues with the deployment.
What do I think about the stability of the solution?
The performance one - we want to start hitting it a lot more frequently. We've used it for some of the performance testing, and even in development it worked pretty well for us. Even with all the training environment things we've done, I don't think we've really bumped up against a usage issue yet where we had to put another instance out or otherwise scale up the number of hits it could take. There have been some issues with Service Virtualization, but I would have to ask the team for more specific details.
What do I think about the scalability of the solution?
We just haven't ramped Release Automation up high enough to know. Certainly the concepts of how it's designed seem good. With Service Virtualization, the performance has definitely held up in the spots where we needed that kind of capability. Both of them have been scalable; I'm not aware of any real concerns regarding scalability.
Which solution did I use previously and why did I switch?
We built a solution in-house that coupled the data, the configuration of endpoints, and how the app needed to be configured in a given environment. We had it coupled with ServiceNow to do the workflow and trigger the builds, but then we could also trigger the deployment through the Release Automation product and integrate it into that whole flow.
What about the implementation team?
There hasn't been an issue per se with Service Virtualization. There are areas that I want to dig into more, such as containers. The idea is: how can I quickly decide what to do with Service Virtualization in the way of getting it into an environment, so that if I need to hook up to certain applications, maybe because a third party doesn't have a test environment, or whatever it needs to be, maybe it's a credit check call, I can leverage those virtualized systems while spinning up the environment? That's something we'll need to look at from a deployment standpoint, to get it more integrated into our whole environment process.
Regarding Release Automation, I know that there were definitely some good strides and improvements that happened to that product from the first days of Nolio to where it is now. Before the WebLogic support arrived, especially, some of those capabilities were missing.
For Service Virtualization, we went with the CA recommendation of having some initial consulting to help us get started and get the guys up to speed. We were able to bring the consulting guys in partly because we had this merger and acquisition going on; there were funds available that we might not otherwise have had, because CA rates are kind of up there.
It was kind of funny: we had already been working with the product for three or four weeks on our own, and we had already done some basic virtualization. I remember when the consulting team came in, the project manager said, "Well, normally we'd start by working with you on how to get your environment set up and all that, but since you guys have already done all that, we can skip past it." We dove right into best practices and techniques, which I think is the key. It's something I've covered in my talks at CA World: I leverage developers to do my virtualization to start with, because developers have a broader set of skills and a bigger bag of tricks they can draw on.
If they look at a problem, and maybe they have some data set sitting in a file that they can't get into the other tool as-is, they might write a simple little application to translate that data into a form the other tool recognizes. It's the way you think about design patterns and implementations. I had a team that started off with four people, and then we expanded to six at one point. We were leveraging some offshore resources as well. We were able to make headway quickly by bringing in some fairly technically sound developer types to do it, rather than people from a QA organization, where it can be a bit more of a challenge because they don't necessarily have all the development skills and capabilities a developer has.
I was able to get a good set of resources and developers to work on it. One of the guys switched his role and became the lead of this effort. Because of all his skills and background, he got there quickly. We didn't have a lot of problems ramping up. I think CA did help us in the sense of jump-starting it, but like I said, I firmly believe in the use of developers if you're building a center of excellence, because they're going to give you robustness and richness in what you develop that is probably going to be more tailored for reusability. They're going to be keeping an eye on it, or at least that's what they should be doing from that perspective.
What's my experience with pricing, setup cost, and licensing?
There are some things about the licensing, like having to put it onto every server, that will definitely be an evaluation point as we look at solutions and how we might do things. If we end up having to put it onto every server, pricing will be a bit of a question.
What other advice do I have?
Where I see Service Virtualization and Release Automation coupled is at the point of some type of environment testing. I'm going to be doing testing, so I'm either doing my testing in development, or I'm doing my testing in QA. Those are the most common places we're going to be doing that. There's integration testing, but I don't necessarily have all the systems readily available to me, fully functional and up and running. Or sometimes we literally might have an outage while they're updating the middleware call, so maybe it's important for offshore to be able to keep their testing going.
The ability to swap in or hook in virtualized services so that you can keep your testing going is important. When you're doing the deployment, you could have your setups configured such that you say whether a given endpoint should point to a virtualized service or the live service. I think it's important that you know how the data plays together and which systems belong together, because maybe there's a group of them. You can't simply change one service; maybe I have to have all three of them virtualized, running the virtualized ones together, because of the way the data works.
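The "you can't simply change one service" constraint can be sketched as dependency groups (a hypothetical illustration; the service names are made up): services whose test data is coupled must be flipped to virtual together, or the data stops lining up.

```python
# Services whose responses share coupled test data: flipping one endpoint
# to virtual without the others would break data consistency mid-flow.
GROUPS = [
    {"customer-lookup", "account-detail", "billing-history"},
]

def endpoints_to_virtualize(service: str) -> set[str]:
    """Return every service that must switch to virtual along with `service`."""
    for group in GROUPS:
        if service in group:
            return group
    return {service}  # uncoupled services can switch alone
```

A deployment tool consulting a table like this would always flip whole groups, never a single coupled endpoint.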
This whole idea that you don't necessarily have to have a full end to end environment to get people functioning is important.
What’s been key to getting people to embrace it is seeing what Service Virtualization can do for them.
Regarding the virtualization capabilities, I remember when I saw them and I thought I knew exactly how I was going to use it. Training was one of those things that I took to the business that actually triggered a statement by one of our sales operations people who said this changes how we hire people. That caught me off guard, I didn't expect it, but it made sense once we talked about it a little bit more.
I'm a software developer, I just want to get in and start working with it. I just wanted to get my hands on it. I want to work with it, touch it, and feel it.
When discussing changes in how we hire, we always had to think about all the types of training people had to do. There are weeks of training as you're hiring people, but you don't always have the time to spend four or five weeks training them. The ability to have those training scenarios right there, and not having to maintain a whole end to end environment, which is almost impossible and never up to date, that's a huge win. Of all the products, I think Service Virtualization is the easiest one to sell and the easiest one to get your arms around as to why there's a benefit, and I was able to sell it quite readily into the company for those very reasons.
One of the things that we've just come across is that you set up a training environment, but depending on how a company does its projects, if you're going to start something and put training in it, you need to make sure you also stay committed to it so that it gets enhanced. You need to work closely with the business so that they have a sense of the value, and you need to make sure it's factored into the project work. A lot of the time, development teams just think about how to code new features, but we also need to worry about operational concerns: whether I'm relying on my release automation tools, how I integrate, how my apps are being monitored, how easy it is to test them, and, if an issue comes up, how quickly they can detect it.
There's so many good things that Service Virtualization can do, and almost everybody suffers from the same thing with middleware teams who aren't responding quickly to them. You’re on different cadences of when things can be released, and just trying to have stable environments. Service Virtualization can help work around some of those challenges.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.