Even though the way developers program topics for the Virtual Agent is very intuitive and flexible, there is still room for improvement. Some things can be done visually, and others can be done through JavaScript. ServiceNow could help developers by shipping ready-made structures, JavaScript templates, or prebuilt topics that can be integrated into the basic package. That would help a lot.
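To make the request concrete, here is a minimal sketch of the kind of ready-made, reusable JavaScript structure the review is asking for: a small topic template that pairs a topic name with keyword triggers and a handler. All names here are hypothetical illustrations, not ServiceNow APIs.

```javascript
// Hypothetical reusable topic template; these names are illustrative,
// not part of the ServiceNow Virtual Agent API.
function makeTopic(name, keywords, handler) {
  return {
    name,
    // True when the user's utterance contains any trigger keyword.
    matches: (utterance) =>
      keywords.some((k) => utterance.toLowerCase().includes(k)),
    handle: handler,
  };
}

// A developer could then define a topic in one line instead of
// rebuilding the same structure for every flow.
const resetPassword = makeTopic(
  "Password Reset",
  ["password", "locked out"],
  (user) => `Starting a password reset for ${user}.`
);

console.log(resetPassword.matches("I forgot my password")); // true
console.log(resetPassword.handle("alice"));
```

Shipping a handful of such templates in the base package would let teams assemble common topics quickly and keep their custom JavaScript consistent.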
While interacting with the virtual agent, end-users sometimes wanted to switch to a live agent mid-conversation because ServiceNow's natural language understanding (NLU) model couldn't understand what they were saying. In some cases, users would simply opt for live agent support, and the virtual agent would route them directly to a live agent. However, there were some ServiceNow bugs that we reported. One known error was that when the virtual agent handed a conversation over to a live agent, the live agent had no way to tell whether the end-user was still in the chat. If the end-user left the chat, the agent didn't know; agents would just wait for two or three hours, and the system never ended the conversation either. So the problem was that abandoned conversations were never closed. This problem was fixed in the San Diego release.

The multilingual features, where the machine handles everything, were a bit tricky. ServiceNow has been improving the NLU models, but if we wanted a multilingual virtual agent, we had to train the NLU properly ourselves. That was still a tricky part of ServiceNow. There were cases where the machine did not recognize the topic correctly and did not route users to the right topic; this was specific to the Spanish language.
I am the owner and architect in the company with regard to permissions, so everyone goes through me to manage their flows. If I could change the permissions so that each department could see only its own flows, it would simplify the process a lot. That's something they should improve. I'd also like integration with AI Search or any enterprise search engine. That would add a strategic dimension, because you'd no longer need to capture and understand everything yourself; you could draw on a data lake of content instead. Conversely, imagine giving cognitive capabilities to an existing search engine, so that the machine could generate reports. That's the next step toward AIOps.