We have factorization software in our product, and we use natural language processing. We have been testing TensorFlow on NVIDIA chips for ten years and even had TensorFlow running on IBM Power chips before IBM and Google did. We use the open-source framework, and our primary use case for TensorFlow is pixel-level analysis of images and image analytics.
We considered TensorFlow because we had a particular strategy for the hardware. We believe in a combination of central computing and edge computing, and what made TensorFlow so appealing to us is that you can run it on a cluster computer and on a mobile device. That means you can build an AI continuum: massive power in the data center on the cluster, while certain jobs are performed at the edge. You can literally have the same code base work in the cluster and at the edge, which is a huge advantage. TensorFlow creates a homogeneous landscape all the way out to the camera in the field, which has a processor behind it so that you can do all kinds of image processing in each camera. And when you have 50,000 cameras, bringing the raw data from every camera to the center is impossible, so you preprocess at the edge and only bring the preprocessed data to the center.
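To give a sense of what that one-code-base continuum looks like in practice, here is a minimal sketch using TensorFlow's standard Keras and TensorFlow Lite APIs. It is illustrative only, not our production code; the model architecture and file name are made up.

```python
import tensorflow as tf

# Train (or load) a model on the data-center cluster as usual.
# The architecture here is a placeholder; any Keras model works.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... model.fit(train_data) on the cluster ...

# Convert the very same model for the edge with TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink/quantize for small devices
tflite_model = converter.convert()

# Ship this file to the camera-side processor.
with open("camera_model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the device, the TFLite interpreter runs the converted model, so the training code in the data center and the inference code at the edge stay one code base.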
It would be cool if TensorFlow made it easier for companies like us to write code that runs across different hyperscalers.
Independent of the hardware platforms or cloud platforms, we would like to run TensorFlow across different hyperscalers. The hyperscaler platforms are Google, Amazon, and Microsoft, and yes, you can run TensorFlow on all three of them, but you need to make tweaks. From the perspective of an independent software supplier and software development company, it would be so much easier if the solution were pluggable across any infrastructure.
We know that's a tall order because the solution ties into the different setups the hyperscalers have; the hardware is not set up identically across them. If TensorFlow could provide a layer that made this easy, it would benefit us.
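To illustrate the kind of tweak involved, here is a hypothetical sketch of how one might paper over some of the differences today: probe what accelerators the platform exposes and pick a tf.distribute strategy accordingly. This is our assumption of a workaround, not an official TensorFlow portability layer.

```python
import tensorflow as tf

def pick_strategy() -> tf.distribute.Strategy:
    """Pick a distribution strategy based on whatever hardware this cloud exposes."""
    try:
        # Google Cloud exposes TPUs; the resolver raises if none are reachable.
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    except (ValueError, tf.errors.NotFoundError):
        pass  # no TPU on this platform
    if tf.config.list_physical_devices("GPU"):
        # NVIDIA GPU instances on AWS, Azure, or Google Cloud.
        return tf.distribute.MirroredStrategy()
    return tf.distribute.get_strategy()  # CPU-only fallback

strategy = pick_strategy()
with strategy.scope():
    # The same model code then runs on whichever infrastructure is underneath.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
```

Even with something like this, each hyperscaler still differs in drivers, networking, and storage, which is exactly the tweaking we would like to see abstracted away.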
Back in 2013-14, though it's no longer a challenge or a commercial issue, we were working with the hardware division of IBM. The combination of IBM processors and NVIDIA processors behaved differently from the combination of Intel and NVIDIA processors that was on the cloud. This meant that we, as an ISV, had to get the solution to work on different hardware.
I've used TensorFlow for around ten years since my organization became an NVIDIA partner.
I rate TensorFlow an eight out of ten for its stability.
TensorFlow is scalable, and I rate it a nine out of ten on scalability. We have five people working on TensorFlow. They are development and research engineers.
When we needed it, we received excellent technical support.
We found the initial setup to be easy, though that may depend on one's technical staff. We are extremely lucky to have many highly versed people, so we found it easy.
We had people proficient in AI in 2013, and we have people with previous experience in AI and high-performance computing. That's also why we could work with the IBM Power chip and get TensorFlow to run on it. We did that work ourselves.
We deployed the solution ourselves, without outside help.
I rate TensorFlow's pricing a five out of ten.
We're also using Azure OpenAI: not the public SaaS version of OpenAI, but the version Microsoft offers to enterprises.
If, like us, you're interested in one AI framework that runs on both edge and central computing, then definitely look into TensorFlow. In other words, if you need AI to extend right into the edge devices, TensorFlow is super.
I rate TensorFlow an overall nine out of ten.