Initially we were automating the regression suite for SAP ECC.
From there we moved into a web application called HVAC Partners, a customer portal we developed. That application also connects to SAP, but it does some things that don't necessarily involve SAP. It is a kind of front end for quotes and sales orders that go into SAP, but it also reports the status of orders, warranty claims, and the like for the customers.
From there we moved into the Middle East SAP ECC instance and automated their regression suite and, from there, we rolled out S/4HANA for our service business. With S/4, SAP releases updates every quarter. Because the S/4HANA instance is in the public cloud, we have essentially two weeks to regression test and to test any new functionality. We started with the last release and did about 70 percent of the testing with Worksoft. We also used the S/4 automation tool, which is geared more toward unit testing, so it's not as valuable as Worksoft. We're wrapping up that automation in about the next month, and then we'll be moving on to a European rollout of S/4. We'll just start working our way across Europe and those implementations.
We have it on a virtual server, and the offshore automation engineers access it through remote desktop.
We're using it with Fiori and it's working fine. We have integration in and out of S/4 to Salesforce.com, so we also automated those flows. The test cases were end-to-end. We start in Salesforce, which is a web application, with, for example, a quote. It then goes into S/4 and gets reviewed and approved, goes back to Salesforce with the approval, and a sales order is entered that ends up going back to S/4. Then there's fulfillment, back and forth, and eventually billing and collections. We were able to do that whole automation with Worksoft, plugging into Salesforce as well as integrating with S/4 and doing the S/4 automation, back and forth. It's been incredibly useful. Using this tool, we saved something like 80 percent of the time it would have taken to test manually.
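To make that quote-to-cash flow concrete, here is a minimal sketch in Python of the ordered steps the end-to-end test walks through. Worksoft Certify tests are built codelessly, so this is only an illustration; every function name here is a hypothetical placeholder, not anything from Salesforce, S/4, or Worksoft.

```python
# Illustrative only: this outline merely mirrors the end-to-end flow
# described above. All function names are hypothetical placeholders.

def create_quote_in_salesforce(customer: str) -> str:
    # In the real test, this step drives the Salesforce web UI.
    return f"Q-{customer}-0001"

def review_and_approve_in_s4(quote_id: str) -> bool:
    # The review and approval happen inside S/4HANA.
    return True

def enter_sales_order_in_salesforce(quote_id: str) -> str:
    # The approval comes back to Salesforce and a sales order is entered.
    return quote_id.replace("Q-", "SO-")

def fulfill_and_bill_in_s4(order_id: str) -> str:
    # Fulfillment, then eventually billing and collections, back in S/4.
    return order_id.replace("SO-", "INV-")

def test_quote_to_cash() -> None:
    quote = create_quote_in_salesforce("ACME")
    assert review_and_approve_in_s4(quote)
    order = enter_sales_order_in_salesforce(quote)
    invoice = fulfill_and_bill_in_s4(order)
    assert invoice.startswith("INV-")

test_quote_to_cash()
```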
In terms of using the Capture feature without knowledge of testing tools, we brought on some new support people. One of them is our web support person and she had no background in Worksoft. She's been using it to do all the initial captures for HVAC Partners, and she's been able to use it very easily. Our more experienced automation engineers follow up, after she's done the Capture piece, and troubleshoot some of the things that she might not understand yet. They're working with her so that she does learn it.
Worksoft's ability to build tests and reuse them is very good. With the other tool we used, we ended up obsoleting the tests and not using them again, whereas now we rerun these, at a minimum, every month. We do that for a few reasons. One reason is to keep the health of the tests up. Suppose a material is obsoleted. A test that has that material in it is going to fail because it's going to say, "Material not found." Or suppose a customer is no longer a customer and the record has been blocked or archived. We run the tests to make sure that the scripts don't need any changes. We also rerun them in case a process has changed. We're releasing changes to SAP about every two weeks: support tickets, enhancements, maintenance, etc. If a business process changes, then the automated test needs to change to reflect it. Running the tests every month, at a minimum, helps make sure that everything is healthy.
The other reason is to identify anything in our quality system that could unintentionally impact other things without the programmers realizing it. We've caught a couple of those in QA, and they said, "Okay, I didn't mean to do that. I only meant to change this one thing," but it changed all kinds of things, and we were able to catch that before it went into production. So the reusability is fabulous if you create the tests properly: no hard-coding, using data tables to hold any of your field selections, and following good automation standards so that you create and consume your data. If you create it and consume it, when you rerun the test, it does the whole thing again. You don't have to worry about finding a sales order that works, for example. You really have to create a logical test design to make it reusable, but as long as you do that, it's very reusable.
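As a rough illustration of that create-and-consume pattern, here is a short Python sketch, not Worksoft syntax: the data-table row is modeled as a dictionary that an early step fills in and later steps read, so nothing is hard-coded and the test can be rerun indefinitely.

```python
# A rough sketch of the "create and consume" pattern, not Worksoft syntax.
# The data-table row is modeled as a dict: step 1 creates a document and
# records its number; step 2 consumes it, so nothing is hard-coded.

test_data = {"material": "MAT-100", "customer": "CUST-42"}  # data-table row

def create_sales_order(data: dict) -> None:
    # Creates the order and stores the new number for downstream steps.
    data["sales_order"] = f"SO-{data['customer']}-0001"

def post_delivery(data: dict) -> None:
    # Consumes whatever the earlier step created; no hunting for a
    # pre-existing sales order that happens to still work.
    assert "sales_order" in data, "an upstream step must create the order"
    data["delivery"] = data["sales_order"].replace("SO-", "DLV-")

create_sales_order(test_data)
post_delivery(test_data)
print(test_data)  # rerunning repeats the whole create-and-consume cycle
```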
It dramatically reduces the time we spend on testing. Before we started using this tool, everyone was pretty much testing manually, and test events were taking two to six weeks. What they did in two to six weeks, depending on the scope of what they were doing and how many people they had involved, we can usually do in one to two days.
The most dramatic example was when we finished the Middle East automation. They were bringing up another company code and they wanted us to run regression testing on all of their current company codes, about seven of them. We completed it in about four days. The IT director came to us and said that it reduced their labor by 93 percent. "Quite frankly," he said, "we would never have been able to do all of that testing. We would have had to engage a minimum of 28 people, and it would've taken them a minimum of eight weeks, and we still would not have been able to do all of the tests. We wouldn't have gotten them done." We were able to do it in a fraction of the time and with a broader scope than they could have managed. They would've done as much as they could, and then they would have gone live and hoped for the best.
We've also been able to use it for other things, like certain recurring tasks that had been done manually. We had people manually monitoring Tidal jobs, which are batch jobs that have been scheduled to run. If a Tidal job fails, somebody has to go in and figure out why it failed, and either restart it or fix it and then rerun it. These are jobs like billing jobs, and we automated the monitoring. Somebody had been spending about 12 hours a week on this, and that person would also have to send out an email to whomever the relevant person was, saying, "Hey, check your batch job. This isn't running." Now they probably spend 15 minutes a week on the billing jobs. The automation sends the emails to the users, documents the results in a spreadsheet, and puts it out on a SharePoint where the auditors can pull them any time they want. It was the same thing with monitoring the claims jobs. We've done a few things like that which have added to the value.
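For a sense of what that monitoring automation does, here is a hedged Python sketch of the loop described above. The real solution runs in Worksoft; get_job_status and restart_job are invented stand-ins, not the Tidal scheduler's API, and the CSV file stands in for the spreadsheet put out on SharePoint.

```python
# Hedged sketch of the monitoring loop described above. The real solution
# runs in Worksoft; get_job_status and restart_job are invented stand-ins,
# not the Tidal scheduler's API.
import csv
from datetime import datetime

JOBS = ["BILLING_DAILY", "CLAIMS_WEEKLY"]

def get_job_status(job: str) -> str:
    # Stubbed result; the real check would query the scheduler.
    return "FAILED" if job == "BILLING_DAILY" else "COMPLETED"

def restart_job(job: str) -> None:
    # Stand-in for restarting (or fixing and rerunning) the failed job.
    print(f"restarting {job}")

def monitor_jobs() -> None:
    with open("job_results.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["timestamp", "job", "status", "action"])
        for job in JOBS:
            status = get_job_status(job)
            action = "none"
            if status == "FAILED":
                restart_job(job)
                action = "restarted"
            writer.writerow([datetime.now().isoformat(), job, status, action])
    # The CSV stands in for the spreadsheet put out on SharePoint.

monitor_jobs()
```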
Automation using Certify has also saved testing time, big-time. As I said, in the Middle East: 93 percent. For the S/4HANA project, what we did in three or four days had been taking them two weeks, and they were not getting through it all. With the release, you don't get to say to SAP, "Hey, testing is running behind, we need another week," because it's in the public cloud. Like it or not, the release is going live. The drill is supposed to be: you test during week one, you remediate in week two, and you go live that weekend. We got our stuff done, 70 percent of the work, in about three days, and it was our first time out of the gate, so it'll go easier with the next release. The rest of the team took the entire two weeks to do their 30 percent. And within that 30 percent, a lot of the tests were smaller. We were doing end-to-end tests that go through Salesforce and S/4, etc.
In terms of defects, the value is in finding them before moving something into production. There are two I'm thinking of that we found in Mexico. One of them would've brought shipping to a halt and the other would have brought receiving to a halt. If you shut down factories, even for a short period of time, there is a domino effect, so the value of those finds is huge. And this wasn't even something that the guys making the change were testing. They were testing the piece that they changed, which was working. What they didn't realize was that they had changed all items instead of just the intended subset. It was a minor goof in the programming; the statement was just too broad.
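To illustrate what a "too broad" statement looks like, here is a made-up Python example, not the actual code involved: the intended change scopes the update with a condition, and the defect is equivalent to dropping that condition so every item changes.

```python
# Made-up example of a "too broad" statement; not the actual code involved.
items = [
    {"id": 1, "plant": "MX01", "status": "active"},
    {"id": 2, "plant": "US07", "status": "active"},
]

# Intended change: block only one plant's subset of items.
for item in items:
    if item["plant"] == "MX01":  # the scoping condition the fix restored
        item["status"] = "blocked"

# The defect was the equivalent of omitting that condition, so every item
# changed. That is exactly what an end-to-end regression run catches.
print(items)
```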
I started in IT about nine years ago, and back then all testing was manual. We would see defects in the high hundreds, up to 1,000, during implementation testing. Now we're probably under 100, so it's much lower. It could also be that we're just getting better at implementations.