The most valuable feature is that it reduces dependencies, so downtime in the environment is no longer a major cost. The money saved can go to something else, such as the cloud.
The important thing is that Service Virtualization allows every individual, and every system, to be up at any time throughout the cycle.
That indirect cost impact is in the millions of dollars for any IT organization. The direct cost may be that the environment is down and a tester is sitting idle for ten hours.
However, the indirect cost is that the end product does not make its release date. That is the business impact. If you look at each silo in isolation, Service Virtualization may seem like just one piece, but in the end-to-end picture there is a huge impact, and that impact is effectively priceless.
I cannot go and say, "Okay, tomorrow I am launching a mobile service," and then my apps do not make it onto mobile. So I cannot calculate it, and I cannot write down that particular dollar value, because I never know what it is.
For example, the features I have added to a sales platform may give me a million-dollar sale, or they may not. However, Service Virtualization at least allows you to release on time, so you can stay aligned with your business strategy.
It gives you granularity and drives transformation. My personal view is that Service Virtualization plays a major role in transforming your development, testing, and operations engineers into business engineers.
The industry is moving towards Agile and DevOps. The most important thing is that each individual in your company thinks of themselves not as an individual, but as the end consumer, or as someone running the business themselves.
Certain products have their own monopoly, and their vendors really don't want to give up market share. Service Virtualization enables you to share everything at the protocol tier.
With a CRM system, you have all the other systems integrated around it. You virtualize those, and the core system remains the CRM. If you never virtualize them, your whole virtualization effort falls down. This is how it works.
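To make the idea concrete, here is a minimal sketch of virtualizing one of those surrounding systems at the protocol (HTTP) tier. The billing endpoint, port, and payload are hypothetical examples for illustration, not any specific product's API.

```python
# Minimal sketch of a virtual service: a stand-in for a downstream system
# (e.g., a billing service the CRM calls) that answers at the protocol
# (HTTP) tier. Endpoint, port, and payload are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualBillingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/billing/accounts/"):
            body = json.dumps({
                "accountId": self.path.rsplit("/", 1)[-1],
                "status": "ACTIVE",
                "balance": 0.0,
            }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the stub quiet during test runs

if __name__ == "__main__":
    # The CRM under test is pointed at this host/port instead of the real
    # billing system, so testing continues even when that system is down.
    HTTPServer(("localhost", 9090), VirtualBillingService).serve_forever()
```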
I can use functional and performance automation everywhere. However, the questions still remain of when and how I need to use it.
If I run a load test and I need Service Virtualization, I can use the same asset. But when I use that same asset for a functional test, do I use it as is, or do I need to modify the workflow?
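As a sketch of that reuse, assume the same hypothetical stub is simply parameterized with a simulated response delay: zero for a functional run, a realistic latency for a load run. The names and values are illustrative only.

```python
# Sketch: one virtual-service asset reused for functional and load tests.
# The only assumed difference between the two modes is a simulated delay.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_virtual_service(latency_seconds=0.0):
    """Build the same stub with a configurable think time."""
    class VirtualService(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(latency_seconds)  # 0 for functional, >0 for load
            body = json.dumps({"status": "OK"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, fmt, *args):
            pass

    return VirtualService

if __name__ == "__main__":
    # Functional run: latency_seconds=0.0. Load run: reuse the same asset
    # and only add realistic latency, e.g. 0.25 seconds.
    handler = make_virtual_service(latency_seconds=0.25)
    HTTPServer(("localhost", 9091), handler).serve_forever()
```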
They must keep that pace, but the product itself is still up and coming in the market. Companies are struggling to figure out how to align the technology with their testing and development practices, or where to fit it into functional, performance, or integration automation.
At the same time, there are a lot of emerging technologies coming onto the market. These other companies say that Service Virtualization gets them to the first step, and then they realize that the protocol work takes them to the fifth step.
The problem is that there is too much out there. I hope that the companies who build these products will focus more on how to shorten that timeline by utilizing Service Virtualization.
If I have an in-house application in place, it allows me to pull information. If I can create an in-house environment manager, then I can go and buy some off-the-shelf products from the industry and do it.
This allows me to have control of each Service Virtualization asset. It also tells me if somebody has built their own idea of a test, or their own test flow. It lets me know how I can adopt that workflow and add my own ideas. Alternatively, I can take the developer's idea and tell them that it sounds great, but it should have been done another way.
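A minimal sketch of what such an in-house environment manager could look like, assuming a simple registry of virtual-service assets; the fields and sample entries are hypothetical.

```python
# Sketch of an in-house "environment manager" registry for Service
# Virtualization assets: who owns each stub, which workflow it belongs to,
# and where it is deployed. Fields and sample data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VirtualAsset:
    name: str
    owner: str
    workflow: str        # e.g. "functional", "load", "integration"
    endpoint: str
    notes: str = ""

@dataclass
class EnvironmentManager:
    assets: list[VirtualAsset] = field(default_factory=list)

    def register(self, asset: VirtualAsset) -> None:
        self.assets.append(asset)

    def by_workflow(self, workflow: str) -> list[VirtualAsset]:
        """Pull every asset attached to a given test workflow."""
        return [a for a in self.assets if a.workflow == workflow]

if __name__ == "__main__":
    manager = EnvironmentManager()
    manager.register(VirtualAsset(
        name="billing-stub",
        owner="dev.team.a",
        workflow="functional",
        endpoint="http://localhost:9090",
        notes="developer's own test flow; review before reuse",
    ))
    for asset in manager.by_workflow("functional"):
        print(asset.name, asset.owner, asset.endpoint)
```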