Lead AI Engineer at a manufacturing company with 11-50 employees
Real User
Top 20
2024-07-25T13:12:00Z
Jul 25, 2024
We encountered version mismatch errors while using the product. It sometimes does not integrate well with other Python libraries, which can be problematic. Additionally, it can be less intuitive than PyTorch when creating neural networks.
The versatility of the concept is undeniable, but it can pose a challenge for developers unfamiliar with machine learning. For newcomers to the field, the learning curve can be steep, often requiring about a year of dedicated effort. Real-time capability needs enhancement, particularly in dynamic environments. Additionally, community support is crucial, and it should be more robust and accessible for remote assistance.
There's a product called DMWay, and I worked with some data scientists who used it. It could output models to several different software languages. TensorFlow, or at least TensorFlow Lite, outputs to C. Adding packages to output to C# or JavaScript, for instance, would be very useful.
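For context, TensorFlow's main cross-platform export path today is TensorFlow Lite, whose flatbuffer output can be consumed from C/C++, Java/Kotlin, or Swift runtimes, but not C# or JavaScript directly, which is the gap described above. A minimal conversion sketch, assuming the standard `tf.lite` converter API and using a tiny stand-in model:

```python
import tensorflow as tf

# Tiny stand-in model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to a TensorFlow Lite flatbuffer. The resulting
# bytes can be loaded by the TFLite runtime on mobile and embedded targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The `.tflite` file is a single portable artifact; what is missing, per the review, is an equivalent first-party emitter for C# or JavaScript targets.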
It would be cool if TensorFlow made it easier for companies like us to run it across different hyperscalers, independent of the hardware or cloud platform. The hyperscaler platforms are Google, Amazon, and Microsoft, and yes, you can run TensorFlow on all three of them, but you need to make tweaks. From the perspective of an independent software development company, it would be so much easier if the solution were pluggable across any infrastructure. We know that's a tall order, because the solution links to the different setups the hyperscalers have; the hardware is not completely and equally set up. If TensorFlow could provide some layer that made this easy, it would benefit us. Back in 2013-14, though it's now no longer a challenge nor a commercial issue, we were working with the hardware division of IBM. The combination of IBM processors and NVIDIA processors reacted differently from the combination of Intel and NVIDIA processors on the cloud, which meant that we, as an ISV, had to get the solution to work on different hardware.
Sales Account Manager Southern Europe, MEA and Turkey at a computer software company with 51-200 employees
Real User
Top 10
2023-02-21T16:32:00Z
Feb 21, 2023
I would love to have a graphical programming interface: a set of menus where you can put things together visually. Complete automation of the integration of the modules would also be interesting.
Data Science Lead at a mining and metals company with 10,001+ employees
Real User
2022-08-04T20:54:14Z
Aug 4, 2022
TensorFlow deep learning takes a lot of computation power, so the more systems you can use, the easier it is. If you can make several systems run on the same task at the same time, it's much faster than having a single, slower system. Running systems in parallel is complex, though, and it can improve; there is a lot of work involved. There is also a learning curve to using this solution well.
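In TensorFlow itself, running one training task across several devices is what the `tf.distribute` strategies (e.g. `MirroredStrategy`) are for. The underlying data-parallel idea can be sketched in plain Python: shard a batch across workers, compute partial gradients concurrently, average them, and apply a single update. This is a concept sketch of the pattern, not TensorFlow's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def grad_mse(w, xs, ys):
    """Gradient of mean squared error for y = w*x on one shard of data."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def parallel_step(w, xs, ys, n_workers=2, lr=0.1):
    """One data-parallel step: shard the batch, compute partial gradients
    concurrently, average them, then update the weight once."""
    shard = len(xs) // n_workers
    shards = [(xs[i * shard:(i + 1) * shard], ys[i * shard:(i + 1) * shard])
              for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(lambda s: grad_mse(w, *s), shards))
    return w - lr * sum(grads) / len(grads)

# Fit y = 2*x from a toy dataset.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.0
for _ in range(50):
    w = parallel_step(w, xs, ys)
print(round(w, 3))  # converges toward 2.0
```

The coordination cost the reviewer mentions shows up even here: the workers must synchronize on every step before the shared weight can be updated.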
There are a lot of problems, such as integrating our custom code. In my experience, model tuning has been difficult: editing and tuning the graph model for best performance means going into the model itself, and we do not have a model viewer for quick access. There should be better integration and standardization across different operating systems. We always need to convert from one model to another; there is no single standardized model output that we could use across platforms such as Intel x86, x64-based, ARM-based, or Apple M1 chips.
Data Scientist at a university with 5,001-10,000 employees
Real User
2021-03-29T23:53:31Z
Mar 29, 2021
It would be nice to have more pre-trained models that we can utilize within layers. I use a Mac, and I am unable to use AMD GPUs. That's something I would definitely like to be able to access within TensorFlow, since most of it is CUDA-based. This only matters for local machines because, in Azure, you can access any GPU you want from the cloud. But the clients I work with don't have cloud accounts, or they don't want to spend the money; they all see it as too expensive and want to know what they can do on their local machines.
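On the pre-trained side, `tf.keras.applications` does ship a number of backbones that can be dropped in as layers for transfer learning. A typical sketch, using `weights=None` here only to avoid the ImageNet download (in practice you would pass `weights="imagenet"`):

```python
import tensorflow as tf

# A backbone used as a frozen feature extractor, with a new classification
# head on top. weights=None only to skip the pre-trained-weights download;
# normally you would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The reviewer's broader point stands: on a Mac with an AMD GPU, models like this fall back to CPU, because stock TensorFlow GPU acceleration targets CUDA hardware.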
I tend to find it a bit too oriented toward AI itself for other use cases, which is fine; that's what it's designed for.
Overall, the solution has been quite helpful, and I can't recall missing any features when I was using it. I know this is out of the scope of TensorFlow, but every time I sent a request, the model had to be reloaded into RAM before it could make the prediction or inference, which makes the response time for each request that much longer. If they could provide anything to help with this, it would be great.
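The usual workaround for this is to load the model into memory once, at process start, and reuse it across requests instead of reloading it per request (this is also what dedicated model servers do). A minimal stdlib-only sketch of the pattern; `get_model` and `predict` are placeholder names, and the dict stands in for a real model object:

```python
import functools
import time

@functools.lru_cache(maxsize=1)
def get_model():
    """Load the model once; every later call returns the cached instance."""
    time.sleep(0.1)  # stand-in for an expensive load from disk into RAM
    return {"weights": [0.5, 1.5]}  # placeholder for a real model object

def predict(x):
    model = get_model()  # cheap after the first call
    w0, w1 = model["weights"]
    return w0 + w1 * x

start = time.perf_counter()
predict(1.0)                      # first request pays the load cost
first = time.perf_counter() - start

start = time.perf_counter()
results = [predict(x) for x in range(100)]
rest = time.perf_counter() - start
# The 100 cached requests together finish faster than the single cold one.
```

The trade-off is that the model permanently occupies RAM between requests, which is exactly what removes the per-request reload latency the reviewer describes.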
Machine Learning Engineer, AI Consultant at intelligentbusiness.hu
Real User
Top 20
2020-12-07T16:25:36Z
Dec 7, 2020
I don't have much experience with the dashboards in the solution; however, it's possible they could be improved. I need more experience with the security aspect of the solution, though they could always develop that area more. It would be nice if the solution were available in Hungarian, and I would like more Hungarian NAT models.
In terms of improvement, we always look for ways to optimize the model, accelerate the speed, improve the accuracy, and see how we can optimize with different techniques; there are various techniques available in TensorFlow. Maintaining accuracy is an area they should work on. When more and more objects are involved with the model, the models get confused, so maintaining accuracy and speed as the number of classes grows is the biggest area for improvement. It is a major challenge that we are seeing right now, and we are trying to solve the problem.
TensorFlow is primarily geared toward the Python community at present. JavaScript is a different world: all the websites, web apps, and mobile apps are built in JavaScript, and JavaScript is the core of that. What can be improved is how TensorFlow can mix in, and how JavaScript developers can use it. There's a huge gap currently. If you are a web developer, using machine learning with TF is not as straightforward as using a regular JavaScript library by reading its documentation. TensorFlow should provide a way to do that easily.
If I want to develop my own gradient descent, using TensorFlow's gradient descent but implementing it in my own way, it can be difficult. If I want to change just one thing in the implementation of a TensorFlow function, I have to copy everything they wrote and change it manually, if indeed it can be amended at all. This is really hard, as it's written in C++ and has a lot of complications. A feature allowing you to write bespoke code into an implementation of TensorFlow would be really great.

Another way TensorFlow could be much better optimized is better CPU support. I know the problem is with Python in general: it only lets you use one thread on the CPU, and even while using TensorFlow, it uses two threads. If I have a high-powered CPU, I cannot fully use it. For example, on my laptop I have a high-powered CPU and I'm using Ubuntu, but my GPU is not recognized, so I can use the CPU, but it's not really optimized for this purpose; huge calculations require GPUs. I think that could be the second thing to optimize.

TensorFlow 2 has huge improvements over TensorFlow 1. However, it would be really nice if we could somehow take code written in TensorFlow 1 and incorporate it into TensorFlow 2; as it stands, it generates a lot of errors, and you have to change a lot of code and settings. What could be improved is consistency between the versions. TensorFlow 2 is effectively a different product from TensorFlow 1.
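As a point of comparison, the mechanics of a bespoke gradient-descent variant are simple when you own the whole loop; the difficulty described above is splicing such a variant into TensorFlow's C++ internals, not the math itself. A self-contained sketch of a custom update rule in plain Python, with no TensorFlow involved:

```python
def descend(grad_fn, x0, lr=0.1, decay=0.9, steps=500):
    """Gradient descent with a bespoke tweak: a simple momentum term.
    This is the kind of one-line change that is trivial in your own loop
    but hard to inject into a framework's built-in optimizer."""
    x, velocity = x0, 0.0
    for _ in range(steps):
        velocity = decay * velocity - lr * grad_fn(x)
        x += velocity
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = descend(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # approaches the true minimum at x = 3
```

In TensorFlow the supported route for this is subclassing the Keras optimizer base class and overriding its update step, which works but requires conforming to the framework's variable and slot machinery rather than editing the built-in implementation directly.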
It doesn't allow for fast prototyping. Usually, when we do prototyping, we start with PyTorch, and once we have a good model that we trust, we convert it into TensorFlow. So TensorFlow is definitely not very flexible.
TensorFlow is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is...
The process of creating models could be more user-friendly.
The solution is hard to integrate with the GPUs.
There are connection issues that interrupt the download needed for the data sets. We need to prepare them ourselves.