What is our primary use case?
I'm in ACI operations. We currently use Cisco ACI to host the entire server farm and all the applications in our data center here in Qatar, as well as in other locations.
How has it helped my organization?
Normally, when you're configuring your core switches and your regular switching fabric, like Nexus or any of the HP platforms, you configure VLANs, and if you're dividing a switch, you configure a virtual device context. Instead of this, you have different tenants for your different environments and segments, and you have automation on top of it if you are running virtualization domains. It removes the traditional networking configuration and gives you complete control over your switching fabric from one controller.
Also, it has APIs. You can use REST APIs and have your configuration already built as XML or JSON files and push it using tools like Postman. You can also write Python scripts and build these kinds of automation if you want to work with the API. It provides faster provisioning of the network and faster provisioning of your applications.
If you go for full automation, you can build your own tools. I have my own tools that I built in Python. If I want to configure an EPG or an interface, I set a few parameters in my script, it pushes the configuration to ACI, and ACI configures it.
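To give a rough idea of what such a script can look like, here is a minimal sketch against the APIC REST API; it is not my actual tool. The APIC address, credentials, and object names (tenant "Prod", application profile "Web", EPG "frontend", bridge domain "BD-Web") are placeholder assumptions.

```python
import requests

APIC = "https://apic.example.local"        # placeholder APIC address
USERNAME, PASSWORD = "admin", "password"   # placeholder credentials

session = requests.Session()

# Log in; APIC sets a session cookie that is reused on subsequent calls.
login_payload = {"aaaUser": {"attributes": {"name": USERNAME, "pwd": PASSWORD}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login_payload, verify=False)
resp.raise_for_status()

# Create an EPG "frontend" under tenant "Prod" / application profile "Web"
# and associate it with bridge domain "BD-Web".
epg_payload = {
    "fvAEPg": {
        "attributes": {"name": "frontend"},
        "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "BD-Web"}}}],
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-Prod/ap-Web/epg-frontend.json",
                    json=epg_payload, verify=False)
resp.raise_for_status()
print("EPG pushed:", resp.status_code)
```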
In terms of time saved, provisioning a new service or a new application takes less than one minute. I gave one IP address to my systems team to configure on the application and tag the EPG on the application side. It was just a matter of tagging.
What is most valuable?
Among the valuable features are the integration with VMM domains and the Layer 4 to Layer 7 devices, with device packages for F5, Palo Alto, and ASA.
We are also doing automation from ACI and we have integration with Azure. With the Azure Stack integration we can have total automation. We can configure the EPGs from there, and we can configure load-balancing functionality from there as well. The most useful part is that you don't need to configure anything on ACI itself. You configure on Azure and it will provision your application. This is the highest level of automation with the Microsoft platform.
In the second level of integration, you create the EPGs and the gateways on ACI yourself. They are then configured on SCVMM and you tag the VLANs there. It removes the hassle of configuring port groups and VLAN tags on the VMM, the virtualization platform. You configure within ACI, and it becomes visible there. It removes the networking administration from the system side, and you have complete control.
You can also do microsegmentation. You can isolate certain parts of the EPGs.
In addition, you have a complete fabric you can connect to, and you can do static binding anywhere across the fabric. You don't need to configure specific VLANs or run different cables. All of the switches are connected to the spines, so you have complete reachability across the fabric. You can have multi-tenancy and multiple fabric configurations for different types of connectivity. You would not have this on a normal switching fabric.
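As a rough illustration of that static binding, here is a hedged sketch of the kind of REST call involved. The pod, leaf, interface, and VLAN values, like the tenant and EPG names, are made-up placeholders.

```python
import requests

APIC = "https://apic.example.local"  # placeholder APIC address

session = requests.Session()
# Same login flow as in the earlier sketch; credentials are placeholders.
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
             verify=False).raise_for_status()

# Statically bind EPG "frontend" (tenant "Prod", app profile "Web") to
# leaf 101, interface eth1/10, tagged with VLAN 100. Because every leaf
# hangs off the spines, any port in the fabric could be bound this way.
binding = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-1/paths-101/pathep-[eth1/10]",
            "encap": "vlan-100",
        }
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Prod/ap-Web/epg-frontend.json",
             json=binding, verify=False).raise_for_status()
```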
What needs improvement?
Where there is room for improvement in ACI is the Layer 4 to Layer 7 packages. Normally, when you're upgrading your ACI fabric or introducing new Layer 4 to Layer 7 devices, there are some constraints and limitations, and you need to check before you do it. The same applies to F5 load balancers: when you use device packages, you do not get the ASM functionality, F5's WAF (web application firewall), so you need to configure it manually. There is some room for improvement here.
The rest of it, for VMM domains, is improving. Cisco is introducing new features. I don't feel that it's unstable or that it needs more improvement. But the Layer 4 to Layer 7 packages still need improvement. They need quite a bit of work.
Currently, we are using it in our test lab for Layer 4 to Layer 7 services; we are not using it in production. We are using unmanaged Layer 4 to Layer 7 devices, not complete device packages.
I'm looking forward to something called Cisco Tetration. I have never worked on it, but it's available now. It will map everything: which ports are used between users and applications, and between applications. It will map that onto ACI automatically, at the level of ACI contracts and applications. It's like a big-data platform. It will understand the application, the port requirements, and the security requirements, and it will perform some types of automation. Right now, ACI is lacking this. There's some intelligence within it, but not much.
For how long have I used the solution?
More than five years.
What do I think about the stability of the solution?
It's a very stable product in terms of switching fabric. It's quite reliable. It doesn't fail that much compared to other switching platforms. There are some things you need to be cautious of, like when you are configuring contracts. When you are configuring L4 and L7, you need to be aware of what type of configuration you're doing. Sometimes when you are configuring something which is third-party, not Cisco, you need to be aware of what the end result will be. So you need to do it in a test environment first, and then do it in production.
What do I think about the scalability of the solution?
In terms of scalability there is just one limitation. When you want the security rules and features to be applied at the application NIC level - on the virtual NIC, on the network interface of the application itself, in the virtualization domain - you cannot do that. The application traffic needs to reach the fabric so the security policy can be applied; only then can it talk to other applications. This is one thing that is missing in ACI. But you cannot say that it's actually missing, because that's the fabric-overlay approach of this SDN; policy is not enforced at the hypervisor level as it is with NSX.
How are customer service and support?
Technical support is quite mature now; it's not as bad as it was before. I'm someone who has been working with ACI for a long time. Most engineers only have two or three years of experience with ACI, whereas I have worked with it since it started, from version 1.1, and I have used more or less all the OS versions. In the beginning, support was quite bad, but now it has improved notably. They have good engineers for the VMM side, and they have separate departments for separate things.
Response time is good, but it depends. If your call is handled by the European or American site, the support is better. But if it's handled by the Indian site or another site, it's not that mature yet.
Which solution did I use previously and why did I switch?
Currently, we don't have any other SDN solutions, but I have experience with SDN through NSX. I hold the VCIX-NV - network virtualization - certification from VMware.
The biggest difference is that NSX runs on compute, at the hypervisor level, whereas ACI runs as an overlay on the switching fabric. That is the major difference. In NSX you can put policies closer to the application, at the NIC level, but in ACI you have the constraint that traffic needs to reach the fabric for security policies to be applied.
How was the initial setup?
The last setup I did was a freelance project in Dubai for Emaar. I also did one of the biggest projects here in Qatar for our company, and an extension project at Qatar University. I have also done document and design evaluation, from the vendor side, for a project that didn't start because of budget constraints; it's still not completed and they are still evaluating.
In general, the setup is a little bit complex, but it removes future complexity. In the beginning, for newcomers, for new engineers, it's a little complex. Even for me, when I was learning it, it was a little bit harder because it doesn't use conventional switching. It runs multiple types of OS's inside the fabric, so that can cause a little bit of confusion. But after some time, you will feel that it's more logical.
The deployment time depends on how many leaf switches and spine switches there are, and on how many applications there are. If it's a migration, it takes more time. If it's a greenfield project, it will not take that much time.
I did one deployment that was a complete greenfield project. There was nothing there, no migration. They were building a new data center and it was a small setup: six leaf switches and two small spine switches. That took less than one month.
Regarding implementation strategy, there are two types of approaches: network-centric and application-centric. If it's network-centric, each VLAN has its own bridge domain. But if you take a fully application-centric approach, you have one BD for everything, you configure multiple gateways there, and you specify contracts, as sketched below.
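To make the contrast concrete, here is a rough sketch of what the two tenant layouts can look like as APIC-style payloads. All tenant, bridge domain, subnet, and contract names are hypothetical, not our production design.

```python
# Hedged sketch contrasting the two approaches as APIC-style payloads.

# Network-centric: each legacy VLAN gets its own bridge domain (with an EPG
# that maps to it one-to-one).
network_centric_tenant = {
    "fvTenant": {
        "attributes": {"name": "Legacy"},
        "children": [
            {"fvBD": {"attributes": {"name": "BD-VLAN10"},
                      "children": [{"fvSubnet": {"attributes": {"ip": "10.0.10.1/24"}}}]}},
            {"fvBD": {"attributes": {"name": "BD-VLAN20"},
                      "children": [{"fvSubnet": {"attributes": {"ip": "10.0.20.1/24"}}}]}},
        ],
    }
}

# Application-centric: one bridge domain carries multiple gateways, several
# EPGs share it, and traffic between EPGs is allowed only through contracts.
application_centric_tenant = {
    "fvTenant": {
        "attributes": {"name": "App"},
        "children": [
            {"fvBD": {"attributes": {"name": "BD-App"},
                      "children": [
                          {"fvSubnet": {"attributes": {"ip": "10.1.10.1/24"}}},
                          {"fvSubnet": {"attributes": {"ip": "10.1.20.1/24"}}},
                      ]}},
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}},  # contract between EPGs
        ],
    }
}
```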
The number of staff required for a deployment depends on the fabric, the leaves and spines. Deployment generally takes two or three guys. For the configuration, I'm the only one. I can do it, no problem. But for physical stacking and connectivity, it takes a number of people. For configuration, one person is more than enough.
We have plans to increase usage. We are extending our fabric all the time; we started with 14 leaves and we now have around 24. We're also planning to implement it at our DR site. All over the Middle East there is huge demand for ACI, because Cisco is pushing this platform for core data centers.
What was our ROI?
It decreases network provisioning time and application provisioning time. It also takes fewer resources to manage it. You don't need a number of consultants to manage the ACI fabric because it's a centralized system. You will have one APIC controller which can manage more than 200 leaf switches. It depends on the APIC sizing. You can have multiple switches connected to it and you can manage it.
What's my experience with pricing, setup cost, and licensing?
If you compare the licensing and total cost of ACI, it's cheaper than NSX because of the licensing fees. If you are going for full NSX features it will be too expensive, especially the next-generation firewalling feature.
What other advice do I have?
If somebody is planning to implement ACI, it's mostly because they want their network to be centralized and more organized, with more efficient provisioning of networking and applications. By implementing ACI they will need fewer resources and will reduce operations costs. They will have more flexibility over the network and can run multiple types of automation on the fabric, instead of using a normal switching fabric.
Maintaining it is one thing; operating it is another. Operations depend on the number of applications and their business criticality. You need to decide whether it's a 24-hour operation where you need two or three people rotating shifts. Currently, we don't have shifts, and I'm the only one managing the ACI, but we have an on-call rotation. Sometimes I get called; sometimes my colleagues get called and relay the information to me. But as I built the fabric here, I set it up so that I don't need to come in urgently. Everything is redundant and connected on a dual-switch basis, so if one switch fails or there's a configuration issue, there will be no downtime.
We have about 3,000 end users. It's our core. All the applications are hosted there.
I would rate the solution at nine out of ten. I have had a very good experience with ACI. My main focus is on security and data centers, and data center technology is one of my major areas. I have experience with different products, mostly Cisco security products, but ACI has been a good experience.
*Disclosure: I am a real user, and this review is based on my own experience and opinions.