I would like Turbonomic to add more services, especially in the cloud area. I have already told them this. They can add Azure NetApp Files. They can add Azure Blob storage. They have already added Azure App service, but they can do more.
Senior Director of Middleware Hosting Technology at a financial services firm with 10,001+ employees
Real User
Top 20
Aug 17, 2023
We're still evaluating the solution, so I don't know enough about what I don't know. They've done a lot over the years. I used Turbonomic six or seven years ago before IBM bought them. They've matured a lot since then.
Senior Manager Solution Architecture at a consultancy with 10,001+ employees
User
Top 20
Jul 18, 2023
We don't use Turbonomic for FinOps and part of the reason is its cost reporting. The reporting could be much more robust and, if that were the case, I could pitch it for FinOps. You might say that's a weakness, but it's not what it's supposed to do. If it had the reporting, it would be a 10 out of 10.
Systems Engineer at a government with 201-500 employees
Real User
Top 20
Feb 17, 2023
I do not like Turbonomic's new licensing model. The previous model was pretty straightforward, whereas the new model incorporates what most of the vendors are doing now with cores and utilization. Our pricing under the new model will go up quite a bit. Before, it was pretty straightforward, easy to understand, and reasonable.
Assistant Consultant at a tech services company with 10,001+ employees
Consultant
Top 20
Dec 10, 2022
The automation area could be improved, and the generic reports are poor. We want more details in the analysis report from the application layer. The reports from the infrastructure layer are satisfactory, but Turbonomic won't provide much information if we dig down further than the application layer. I would like them to add some capabilities for physical device load resourcing and physical-to-virtual calculation. It gives excellent recommendations for the virtual layer but doesn't have the capabilities for physical-to-virtual analysis. Automated deployment is something else they could add. Some built-in automation features are helpful, but there are a few we aren't using effectively. We want a few more automated features; autoscaling and automatic performance optimization testing would be useful.
Senior Systems Engineer at a university with 1,001-5,000 employees
Real User
Sep 21, 2022
The management interface seems to be designed for high-resolution screens. Somebody with a smaller-resolution screen might not like the web interface. I run a 4K monitor on it, so everything fits on the screen. With a lower resolution like 1080, you need to scroll a lot. Everything is in smaller windows. It doesn't seem to be designed for smaller screens. When I change the resolution to 1080, I only see half of what I would on my big 4K monitor. It would be annoying to have to scroll to see the flow chart. They have a flow chart that goes top to bottom like a tree. On a lower resolution, it might be nice if that scrolls horizontally because it's long, narrow, and tall. It's only three icons wide, but it's 15 icons tall. I think it would be helpful to have the ability to change that for a smaller screen and customize the widget.
Advisory System Engineer at an insurance company with 1,001-5,000 employees
Real User
Apr 6, 2022
The way it handles updates needs to be improved. That would be one of the areas I would focus on. I wish that the upgrades and updates were more easily accessible. Some of that is based on my environment and how it is set up. Because we are in such a locked-down environment, I wish that it were better or easier to perform the updates.
Director, Infrastructure, Wintel Engineering at an insurance company with 5,001-10,000 employees
Real User
Feb 28, 2022
The reporting needs to be improved. It's important for us to know and be able to look back on what happened and why certain decisions were made, and we want to use a custom report for this. Between the different versions and releases, it seems that reporting fell by the wayside. It seems like there was more in the past than there is today, which has made it a little bit more of a challenge for us to capture some historical information.
It's tough to say how they could improve. They've done a lot better with their Kubernetes integration. If you'd asked me a year and a half ago, I would have said that their Kubernetes integration needs work. They started with more of a focus on on-prem VMware virtual machines. I think it was called VMTurbo at one point. Their main goal was to help you with these virtual machines. Now they've pivoted to also supporting containers, cloud-native tools, and cloud resources. At first, it was a little hard because they had terminology that didn't translate to cloud-native applications, given the way Kubernetes deploys things versus a virtual machine. I was left wondering whether something was a Kubernetes resource, but now it's come a long way. I think they've improved the UX as far as Kubernetes goes. I'm interested in seeing what they do in the future and how they progress with the Kubernetes integration. That's something they've improved on a lot.
Team Lead, Systems Engineering at a healthcare company with 5,001-10,000 employees
Real User
Jul 15, 2021
The GUI and policy creation have room for improvement. There should be a better view of some of the numbers that are provided, and they should be easier to access. And policy creation should make it easier to identify groups.
ICT Infrastructure Team Cloud Engineer at a mining and metals company with 10,001+ employees
Real User
May 10, 2021
There is an opportunity for improvement with some of Turbonomic's internal permissions for role-based access control. We would like the ability to come up with customized permissions, or to scope permissions a bit differently than the product provides. We are trying to get broader use of the product within our teams globally. The only thing making mass global adoption hard is the question, "How do we provide access to Turbonomic and give people the ability to do what they need to do without impacting others that might be using it?" because we have a shared appliance. I also feel that the scenario I'm describing is, in a way, somewhat unique to our organization, although some others may run into it. Predominantly, most organizations that adopt Turbonomic probably don't run into the concerns we're trying to overcome in terms of delegating permission access to multiple teams.
Global IT Operations Manager at an insurance company with 501-1,000 employees
Real User
Mar 31, 2021
One ask that I'm waiting for, now that they have the ability to make recommendations for disks, for volumes, and disk tiering, is all about consumption. For example, we have a lot of VMs now, and these VMs use a lot of disks. Some of these servers have 8 TB disks, but only 200 GB is being used. That's a lot of money that we're wasting. In Azure, it's not about what you're using: you purchase the whole 8 TB disk and you pay for it, no matter how much you're using. So something that I've asked for from Turbonomic is recommendations based on disk utilization. In the example of the 8 TB disk where only 200 GB is being used, based on the history, there should be a recommendation like, "You can safely use a 500 GB disk." That would create a lot of savings. And we would have more of a success rate than with the disk tiering, at least in our case. Also, unfortunately, there is no support for cost optimization for networking.
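The disk-rightsizing recommendation this reviewer asks for can be sketched in a few lines. This is only a minimal illustration of the idea; the tier sizes and the 25% headroom factor are assumptions, not Turbonomic's actual logic:

```python
# Hypothetical rightsizing heuristic: given peak historical usage,
# recommend the smallest standard disk tier that still leaves headroom.
STANDARD_TIERS_GB = [128, 256, 512, 1024, 2048, 4096, 8192]

def recommend_disk_gb(peak_used_gb: float, headroom: float = 0.25) -> int:
    """Return the smallest tier covering peak usage plus headroom."""
    required = peak_used_gb * (1 + headroom)
    for tier in STANDARD_TIERS_GB:
        if tier >= required:
            return tier
    return STANDARD_TIERS_GB[-1]  # already at the largest tier

# The reviewer's example: an 8 TB disk with only ~200 GB ever used.
print(recommend_disk_gb(200))  # → 256
```

With a heuristic like this, the reviewer's 8 TB disk holding 200 GB would be flagged for a 256 GB or 512 GB tier, in the same spirit as the "you can safely use a 500 GB disk" suggestion in the quote.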
Head of Enterprise Wide Technical Architecture / Enterprise Technology Specialist at a healthcare company with 5,001-10,000 employees
Real User
Mar 30, 2021
After running this solution in production for a year, we may want a more granular approach to how we utilize the product because we are planning to use some of its metrics to feed into our financial system.
AVP Global Hosting Operations at an insurance company with 10,001+ employees
Real User
Dec 29, 2020
It would be nice for them to have a way to do something with physical machines, but I know that is not their strength. Thankfully, the majority of our environment is virtual, but it would be nice to see this type of technology across some other platforms. It would be nice to have capacity planning across physical machines.
They could add a few more reports. They could also be a bit more granular. While they have reports, sometimes it is hard to figure out what you are looking for just by looking at the date. They could update the look of the console. There are some manual issues. When it comes to forecasting dollar amounts, you have to put in all these inputs, and some of the questions they ask are a little outside the realm of what any engineer should be expected to provide. If they could streamline that, the solution would be much better.
There are some issues on the point of it providing us with a single platform that manages the full application stack. I think version 8 is going to solve a lot of those issues. Turbonomic version 6 doesn't delete anything. So, if I create a VM, then destroy the VM, Microsoft doesn't delete the disk. You have to go in and manually do that. Turbonomic will let you know that the disk is there and needs to be deleted, but it doesn't actually delete it for you. The inherent problem with that is, it will say, "This disk is costing you $200 a month." Then, I go in and delete it. Since this is done outside of the Turbonomic environment, that savings isn't calculated in the overall savings because it's an action that was taken outside of Turbonomic. I believe with Turbonomic 8, that doesn't happen anymore. We are still saving the money, but we can't show it as easily. We have to take a screenshot of, "Hey, you're spending this much on a disk that isn't needed." We then take a screenshot after, say, "Here is what you're spending your money on," and then do a subtraction to figure it out. So, there are some limitations. It is the same with the databases. If a database needs to be scaled up or scaled down, Turbonomic recommends an action. That has to be done manually outside of the Turbonomic environment. Those changes are also not calculated in the savings. So, it doesn't handle the stack 100 percent. However, with version 8 coming out, all of that will change. I would love to see Turbonomic analyze backup data. We have had people in the past put servers into daily full backups with seven-year retention where the disk size is two terabytes. So, every single day, there is a two-terabyte snapshot put into a Blob somewhere. I would love to see Turbonomic say, "Here are all your backups along with their age," to help us manage the savings by not spending so much on the storage in Azure. That would be huge.
Resources that are sitting unused, like test IP addresses, are not covered. With any of the devices that you would normally see attached to a server resource group, such as IP addresses, network cards, etc., you can say, "Look, public IP addresses cost $15 a month. If you have a hundred public IP addresses sitting there not being used, you're talking $1,500 a month." That becomes quite a big chunk of money. I know that Turbonomic is looking at the lowest-hanging fruit, and that may not be worth developing for only a $15-a-month saving, but I would love to see Turbonomic manage Azure fully versus just certain components. One thing that has always been a bit troublesome is that we want to look at lifetime savings. We want to say, "Okay, we installed this appliance in October 2018. We want to know how much money we have saved from 2018 until now." The data is in there. It is just not easy to get to. You have to call an API, which dumps JSON data. Then, you have to convert that to comma-separated values first. After that, you can open an Excel spreadsheet, which has hundreds of rows and columns. You can find the data that you want and get to it, but it is just not easy. However, I believe there is a fix in version 8 to solve this problem. When we switch to version 8, we can't upgrade our appliance, because it's a new instance. What that means is we will lose all our historical data. This is a bummer for us because this company likes to look at lifetime savings. This means I have to keep my old appliance online, even though we're not using it, and I can't import that data into the new appliance. That is kind of a big setback for us. I don't know how other companies are handling it, but I know I will need to keep that old appliance online for about three years. It is unfortunate, but I see what Turbonomic did.
They gave us so many new bells and whistles that they probably think people aren't going to care, because there are so many more savings to be had. However, for our particular environment, people like to see lifetime savings. That puts a damper on things because now I need to go back to the old appliance, pull the reports using an API in a messy way, and then go to the new appliance. I don't even know what I am going to get from that. I don't know if it's going to be the Excel spreadsheet or just a dashboard, then somehow combine the two. While we haven't experienced it yet, when we do upgrade, we'll experience that problem. We know it is coming.
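The manual export path this reviewer describes (call an API, get JSON, convert it to CSV, open it in Excel) is straightforward to script. A minimal sketch, assuming a simple payload of dated savings records; the field names and payload shape are illustrative, not the actual Turbonomic API format:

```python
import csv
import io
import json

def savings_json_to_csv(raw_json: str) -> str:
    """Flatten a JSON array of savings records into CSV text for Excel."""
    records = json.loads(raw_json)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["date", "savings"])
    writer.writeheader()
    for rec in records:
        # Keep only the two columns of interest; ignore any extra fields.
        writer.writerow({"date": rec["date"], "savings": rec["savings"]})
    return out.getvalue()

# Hypothetical payload in place of the real API response.
sample = '[{"date": "2018-10-01", "savings": 1200.5}, {"date": "2018-11-01", "savings": 980.0}]'
print(savings_json_to_csv(sample))
```

A small wrapper like this turns the "hundreds of rows and columns" problem into a two-column file that can be summed directly, though it obviously depends on knowing the real payload structure.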
Director of Enterprise Server Technology at an insurance company with 10,001+ employees
Real User
Dec 3, 2020
For implementing the solution’s actions, we use scheduling for change windows and manual execution. The issue for us with the automation is that we are considering starting to do the hot adds, but there are some problems with Windows Server 2019 and hot adds. It is a little buggy. So, if we turned that on with a cluster that has a lot of Windows Server 2019 machines, we would see blue screens, along with a lot of application issues as well. Depending on what you are adding, cores or memory, the server doesn't necessarily even take advantage of that at that moment. A reboot may be required, and we can't do that until later. So, that decreases the benefit of the real-time automation. For us, there is a lot of risk with real-time. You can't add resources to a server in the cloud. If you have an Azure VM, you can't just go and add two cores when it doesn't have enough processing power. You would have to rebuild that server on top of a new server image which is larger. They have certain sizes available, so instead of an M3, we can pick an M4, then I need to reboot the server and have it come back up on that new image. As an industry, we need to come up with a way to handle that without an outage. Part of that is just having cloud applications built properly, but we don't. That's a problem, but I don't know if there is a solution for it. That would be the ultimate thing that would help us the most: if we could automatically resize servers in the cloud with no downtime. The big thing is the integration with ServiceNow, so it's providing recommendations to configuration owners. If somebody owns a server and Turbonomic makes a recommendation for it, I really don't want to see that recommendation. I want it to give that recommendation to the server owner, then have them either accept or decline that change control. Then, that change control takes place during the next maintenance window.
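The routing flow this reviewer wants (each recommendation goes to the server's configuration owner, who accepts or declines it before anything is scheduled for the next maintenance window) can be sketched generically. All names and structures here are hypothetical; this is not the actual Turbonomic/ServiceNow integration:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    server: str
    action: str

def route_and_execute(recs, owners, approvals):
    """Return (owner, action) pairs scheduled for the next maintenance
    window; recommendations the owner has not approved are skipped."""
    scheduled = []
    for rec in recs:
        owner = owners.get(rec.server, "unassigned")
        if approvals.get((owner, rec.server), False):
            scheduled.append((owner, f"{rec.action} on {rec.server}"))
    return scheduled

# Illustrative data: two recommendations, only one approved by its owner.
recs = [Recommendation("web01", "add 2 vCPU"), Recommendation("db01", "add 8 GB RAM")]
owners = {"web01": "alice", "db01": "bob"}
approvals = {("alice", "web01"): True, ("bob", "db01"): False}
print(route_and_execute(recs, owners, approvals))
```

The point of the shape is that the approval gate sits between the recommendation engine and execution, which is exactly the change-control handoff the reviewer describes.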
Advisory System Engineer at an insurance company with 1,001-5,000 employees
Real User
Dec 2, 2020
The planning and costing areas could be a little bit more detailed. When you have more than 2,000 machines, the reports don't work properly. They need to fix it so that the reports work when you use that many virtual machines.
Server Administrator at a logistics company with 1,001-5,000 employees
Real User
Dec 2, 2020
The way they evaluate reserved instances could use some polishing. The people who make decisions on what to buy are a bit confused by how it's laid out. I don't know if that's the fault of Turbonomic, or just the complexity of reserved instances that Microsoft has created. It's not really that confusing for me, but for some people it's a little bit confusing, and trying to explain it to them is a bit tricky as well. We get to a point of impasse where we just accept that they don't fully understand it, and that I can't fully explain it either. It would help if Turbonomic could simplify or clarify it, and help non-technical people understand what's going on, how the reserved instances are being calculated, and what they apply to.
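Much of the reserved-instance confusion this reviewer mentions comes down to a break-even comparison between pay-as-you-go spend and an up-front reservation. A toy sketch with made-up rates, just to show the shape of the calculation (real Azure reservation pricing has more moving parts):

```python
def ri_break_even_month(payg_per_month: float, ri_total_1yr: float) -> int:
    """First month in which cumulative pay-as-you-go spend reaches the
    up-front one-year reservation cost; 0 if it never does within a year
    (meaning the reservation would not pay off)."""
    for month in range(1, 13):
        if payg_per_month * month >= ri_total_1yr:
            return month
    return 0

# Example: $100/month on demand vs. a $700 one-year reservation.
# The reservation pays for itself after month 7; months 8-12 are savings.
print(ri_break_even_month(100.0, 700.0))  # → 7
```

Presenting the comparison this way ("the reservation pays for itself in month N") is often easier for non-technical stakeholders than raw discount percentages.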
Principal Engineer at an insurance company with 10,001+ employees
Real User
Dec 2, 2020
There are a few things that we did notice. It does kind of seem to run away from itself a little bit. It seems to have a mind of its own sometimes. It goes out there and just kind of goes crazy. There needs to be something that throttles things back a little bit. I have personally seen cases where we've been working on things, pulled servers out of the VMware cluster, and found that Turbonomic was still trying to ship resources to and from that node. So, there has to be some kind of throttling, or the ability for it to not be so buggy in that area, because we've pulled nodes out of a cluster into maintenance mode, brought them back up, and it tried to put workloads onto them while they were outside of the cluster. There may be something available for this, but it seems very kludgy to me. I would like an easier-to-use interface for somebody like me, who just goes in there and needs to run simple things. Maybe that exists, but I don't know about it. Also, maybe I should be a bit more trained on it instead of depending on someone else to do it on my behalf. There are some things that probably could be made a little easier. I know that there is a lot of terminology in the application. Sometimes applications come up with their own weird terminology for things, and it seems to me that is what Turbonomic did.
System Engineer at a financial services firm with 201-500 employees
Real User
Nov 8, 2020
On the infrastructure side, they've been doing it long enough. But until I get a better use case for the cloud, the only thing I can think of is that I'd like to see it work with SevOne, when you're doing true monitoring, so that the software packages work together. It would be good for Turbonomic to integrate with other companies like AppDynamics or SolarWinds or other monitoring software. I feel that the actual monitoring of applications, mixed in with their abilities, would help. That would be the case wherever Turbonomic lacks the ability to monitor an application, or in cases where applications are so customized that it's not going to be able to handle them. There is monitoring that you can do with scripting that you may not be able to do with Turbonomic. So if they were able to integrate better with third-party monitoring software (obviously they can't do them all, but there are a few major companies that everybody uses) and find a way to hook into those a little bit more, the two could work together better.
* More Azure features are needed. They started with AWS and are now ramping up Azure features. * We would like to see more visibility into reserved instances in Azure.
None at the moment. The product is improving with each release, and the new HTML5 interface is great. I would like to see the ability to move custom dashboards from the old interface to the new one, as that is not possible right now.
Based on the way we currently use the product, I do not have any recommended improvements. However, we do not have many of the automated features configured at this time.
The implementation could be enhanced.
I would like Turbonomic to add more services, especially in the cloud area. I have already told them this. They can add Azure NetApp Files. They can add Azure Blob storage. They have already added Azure App service, but they can do more.
We're still evaluating the solution, so I don't know enough about what I don't know. They've done a lot over the years. I used Turbonomics six or seven years ago before IBM bought them. They've matured a lot since then.
We don't use Turbonomic for FinOps and part of the reason is its cost reporting. The reporting could be much more robust and, if that were the case, I could pitch it for FinOps. You might say that's a weakness, but it's not what it's supposed to do. If it had the reporting, it would be a 10 out of 10.
I do not like Turbonomic's new licensing model. The previous model was pretty straightforward, whereas the new model incorporates what most of the vendors are doing now with cores and utilization. Our pricing under the new model will go up quite a bit. Before, it was pretty straightforward, easy to understand, and reasonable.
The automation area could be improved, and the generic reports are poor. We want more details in the analysis report from the application layer. The reports from the infrastructure layer are satisfactory, but Turbonomic won't provide much information if we dig down further than the application layer. I would like them to add some apps for physical device load resourcing and physical-to-virtual calculation. It gives excellent recommendations for the virtual layer but doesn't have the capabilities for physical-to-virtual analysis. Automated deployment is something else they could add. Some built-in automation features are helpful, but we aren't effectively using a few. We want a few more automated features, like autoscaling and automatic performance optimization testing would be useful.
The management interface seems to be designed for high-resolution screens. Somebody with a smaller-resolution screen might not like the web interface. I run a 4K monitor on it, so everything fits on the screen. With a lower resolution like 1080, you need to scroll a lot. Everything is in smaller windows. It doesn't seem to be designed for smaller screens. When I change the resolution to 1080, I only see half of what I would on my big 4K monitor. It would be annoying to have to scroll to see the flow chart. They have a flow chart that goes top to bottom like a tree. On a lower resolution, it might be nice if that scrolls horizontally because it's long, narrow, and tall. It's only three icons wide, but it's 15 icons tall. I think it would be helpful to have the ability to change that for a smaller screen and customize the widget.
The way it handles updates needs to be improved. That would be one of the areas I would focus on. I wish that the upgrades and updates were more easily accessible. Some of that is based on my environment and how my environment is set up. Due to the fact that we are in such a lockdown environment, I wish that it would be better or easier to perform the updates.
The reporting needs to be improved. It's important for us to know and be able to look back on what happened and why certain decisions were made, and we want to use a custom report for this. Between the different versions and releases, it seems that reporting fell by the wayside. It seems like there was more in the past than there is today, which has made it a little bit more of a challenge for us to capture some historical information.
It's tough to say how they could improve. They've done a lot better with their Kubernetes integration. If you'd asked me a year and a half ago, I would say that I think their Kubernetes integration needs work. They started with more of a focus on on-prem VMware virtual machines. I think it was called VMTurbo at one point. Their main goal was to help you with these virtual machines. Now they've pivoted to also supporting containers, cloud-native tools, and cloud resources. At first, it was a little hard because they had this terminology that didn't translate to cloud-native applications for the way that Kubernetes deploy things versus a virtual machine. I was left wondering if this was a Kubernetes resource but now, it's come a long way. I think they've improved our UX as far as Kubernetes goes. I'm interested in seeing what they do in the future and how they progress with future Kubernetes integration. I would say that's something they've improved on a lot.
The GUI and policy creation have room for improvement. There should be a better view of some of the numbers that are provided and easier to access. And policy creation should have it easier to identify groups.
There is an opportunity for improvement with some of Turbonomic's permissions internally for role-based access control. We would like the ability to come up with some customized permissions or scope permissions a bit differently than the product provides. We are trying to get broader use of the product within our teams globally. The only thing that is kind of making it hard for a mass global adoption, "How do we provide access to Turbonomic and give people the ability to do what they need to do without impacting others that might be using Turbonomic?" because we have a shared appliance. I also feel that that scenario that I'm describing is, in a way, somewhat unique to our organization. It might be something that some others may run into. But, predominantly, most organizations that use or adopt Turbonomic probably don't run into the concerns or scenarios that we're trying to overcome in terms of delegating permission access to multiple teams in Turbonomic.
One ask that I'm waiting for, now that they have the ability to make recommendations for disks, for volumes, and disk tiering, is all about consumption. For example, we have a lot of VMs now, and these VMs use a lot of disks. Some of these servers have 8 TB disks, but they're only being used for 200 GBs. That's a lot of money that we're wasting. In Azure, it's not what you're using. You purchase the whole 8 TB disk and you pay for it. It doesn't matter how much you're using. So something that I've asked for from Turbonomic is recommendations based on disk utilization. In the example of the 8 TB disk where only 200 GBs are being used, based on the history, there should be a recommendation like, "You can safely use a 500 GB disk." That would create a lot of savings. And we would have more of a success rate than with the disk tiering, at least in our case. Also, unfortunately, there is no support for cost optimization for networking.
After running this solution in production for a year, we may want a more granular approach to how we utilize the product because we are planning to use some of its metrics to feed into our financial system.
It would be nice for them to have a way to do something with physical machines, but I know that is not their strength Thankfully, the majority of our environment is virtual, but it would be nice to see this type of technology across some other platforms. It would be nice to have capacity planning across physical machines.
They could add a few more reports. They could also be a bit more granular. While they have reports, sometimes it is hard to figure out what you are looking for just by looking at the date. They could update the look of the console. There are some manual issues. When it comes to forecasting dollar amounts, you have to put in all these inputs. Some of the questions they ask are a little outside of the realm that any engineer should be putting in. If they could streamline that, the solution would be much better.
There are some issues on that point of it providing us with a single platform that manages the full application stack. I think version 8 is going to solve a lot of those issues. Turbonomic version 6 doesn't delete anything. So, if I create a VM, then destroy the VM, Microsoft doesn't delete the disk. You have to go in and manually do that. Turbonomic will let you know that it's there and that it needs to be deleted, but it doesn't actually manually delete the disk. The inherent problem with that is, it will say, "This disc is costing you $200 a month." Then, I go in and delete it. Since this is being done outside of the Turbonomic environment, that savings isn't calculated in the overall savings because it's an action that was taken outside of Turbonomic. I believe with Turbonomic 8, that doesn't happen anymore. We are still saving the money, but we can't show it as easily. We have to take a screenshot of, "Hey, you're spending this much on a disk that isn't needed." We then take a screenshot after, and say, "Here is what you're spending your money on," and then do a subtraction to figure it out. So, there are some limitations. It is the same with the databases. If a database needs to be scaled up or scaled down, Turbonomic recommends an action. That has to be done manually outside of the Turbonomic environment. Those changes are also not calculated in the savings. So, it doesn't handle the stack 100 percent. However, with version 8 coming out, all of that will change. I would love to see Turbonomic analyze backup data. We have had people in the past put servers into daily full backups with seven-year retention and where the disk size is two terabytes. So, every single day, there is a two terabyte snapshot put into a Blob somewhere. I would love to see Turbonomic say, "Here are all your backups along with the age of them," to help us manage the savings by not having us spend so much on the storage in Azure. That would be huge. 
Resources, like IP addresses, are not being used on test IP addresses. With any of the devices that you would normally see attached to a server resource group, such as IP addresses, network cards, etc., you can say, "Look, public IP addresses cost $15 a month. So if you don't have a whole lot of money and a hundred IP addresses on a public IP sitting there not being used, you're talking $1500 a month YOY." That becomes quite a big chunk of money. I know that Turbonomic is looking at the lowest hanging fruit. That is not something worth developing for only $15 a month saving, but I would love to see Turbonomic sort of manage Azure fully versus just certain components. One thing that has always been a bit troublesome is that we want to look at lifetime savings. So, we want to say, "Okay, we installed this appliance in October 2018. We want to know how much money we have saved from 2018 until now." The date is in there. It is just not easy to get to. You have to call an API, which dumps JSON data. Then, you have to convert that to comma separated values first. After that, you can open an Excel spreadsheet, which has hundreds of rows and columns. You can find the data that you want and get to it, but it is just not easy. However, I believe there is a fix in version 8 to solve this problem. When we switch to version 8, we can't upgrade our appliance, because it's a new instance. What that means is we will lose all our historical data. This is a bummer for us because this company likes to look at lifetime savings. This means I have to keep my old appliance online, even though we're not using it for that data and I can't import that data into the new appliance. That is something that is kind of a big setback for us. I don't know about other companies and how it is being handled, but I know I will need to keep that old appliance online for about three years. It is unfortunate, but I see what Turbonomic did. 
They gave us so many new bells and whistles that they probably figure people won't care, because there are so many more savings to be had. However, in our particular environment, people like to see lifetime savings. That puts a damper on things, because now I need to go back to the old appliance, pull the reports through the API in a messy way, and then go to the new appliance. I don't even know what I'm going to get from that, whether it will be an Excel spreadsheet or just a dashboard, and then I'll somehow have to combine the two. While we haven't experienced it yet, when we do upgrade, we'll experience that problem. We know it is coming.
For implementing the solution's actions, we use scheduling for change windows and manual execution. The issue for us with automation is that we are considering starting to do hot adds, but there are some problems with Windows Server 2019 and hot adds; it is a little buggy. If we turned that on in a cluster with a lot of Windows Server 2019 machines, we could see blue screens, and applications could crash as well. And depending on what you are adding, cores or memory, the server doesn't necessarily take advantage of it at that moment. A reboot may be required, and we can't do that until later. That decreases the benefit of real-time actions. For us, there is a lot of risk with real-time.

You also can't hot-add resources to a server in the cloud. If you have an Azure VM, you can't just add two cores to it. You have to rebuild that server on a larger server image. They have certain sizes available, so instead of an M3 we pick an M4, and then the server has to reboot and come back up on that new image. As an industry, we need to come up with a way to handle that without an outage. Part of that is just having cloud applications built properly, but we don't. That's a problem, and I don't know if there is a solution for it. The thing that would help us the most is being able to automatically resize servers in the cloud with no downtime.

The other big thing is integration with ServiceNow, so that recommendations go to configuration owners. If somebody owns a server and Turbonomic makes a recommendation for it, I really don't want to see that recommendation. I want it to go to the server owner, who can then accept or decline the change control, and have the change take place during the next maintenance window.
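The owner-routed approval workflow the reviewer wants (each recommendation goes to its server's owner for accept/decline) can be sketched as a simple grouping step. This is a hypothetical illustration of that flow, not Turbonomic's or ServiceNow's actual API; all record fields are made up:

```python
from collections import defaultdict

def route_recommendations(recommendations):
    """Group resize recommendations by server owner so each owner can
    approve or decline their own change controls.

    Hypothetical sketch of the ServiceNow-style workflow described in
    the review; the record fields are illustrative only.
    """
    by_owner = defaultdict(list)
    for rec in recommendations:
        by_owner[rec["owner"]].append(rec)
    return dict(by_owner)

recs = [
    {"server": "app01", "action": "add 2 vCPU", "owner": "alice"},
    {"server": "db01", "action": "resize M3 -> M4 (reboot required)", "owner": "bob"},
    {"server": "app02", "action": "add 4 GB RAM", "owner": "alice"},
]

for owner, actions in route_recommendations(recs).items():
    print(owner, "->", [r["server"] for r in actions])
```

In a real deployment each owner's queue would become a change request scheduled into that server's next maintenance window, rather than an immediate real-time action.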
The planning and costing areas could be a little bit more detailed. When you have more than 2,000 machines, the reports don't work properly. They need to fix it so that the reports work when you use that many virtual machines.
The way they evaluate reserved instances could use some polishing. The people who make decisions on what to buy are a bit confused by how it's laid out. I don't know if that's the fault of Turbonomic or just the complexity of the reserved instances Microsoft has created. It's not really that confusing for me, but for some people it is, and trying to explain it to them is a bit tricky as well. We get to a point of impasse where we just accept that they don't fully understand it and that I can't fully explain it either. It would help if Turbonomic could simplify or clarify it, to help non-technical people understand how the reserved instances are being calculated and what they apply to.
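Stripped of the vendor UI, the core reserved-instance arithmetic is just a rate comparison over the commitment period. A worked sketch with hypothetical hourly rates (not real Azure or Turbonomic pricing) shows the kind of plain-language breakdown that might help non-technical decision makers:

```python
def reserved_instance_savings(payg_hourly, ri_hourly, hours_per_year=8760):
    """Compare pay-as-you-go vs. reserved-instance cost over one year.

    The rates passed in below are hypothetical, for illustration only;
    real reservations also involve term length and upfront payment
    options not modeled here.
    """
    payg_annual = payg_hourly * hours_per_year
    ri_annual = ri_hourly * hours_per_year
    return payg_annual, ri_annual, payg_annual - ri_annual

payg, ri, saved = reserved_instance_savings(payg_hourly=0.20, ri_hourly=0.12)
print(f"Pay-as-you-go: ${payg:,.2f}/yr  Reserved: ${ri:,.2f}/yr  Saved: ${saved:,.2f}/yr")
```

The real complexity the reviewer alludes to (instance-size flexibility, scope, and which running VMs a reservation applies to) sits on top of this basic comparison, which is exactly the part a simplified report could surface.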
There are a few things that we did notice. It can kind of run away from itself; it seems to have a mind of its own sometimes and just goes out there and goes crazy. There needs to be something that throttles it back a little. I have personally seen cases where we pulled servers out of the VMware cluster and found that Turbonomic was still trying to shift resources to and from that node. So there has to be some kind of throttling, or the ability for it not to be so buggy in that area, because we've put a node into maintenance mode, brought it back up, and Turbonomic tried to put workloads on it while it was still outside the cluster. There may be something available for this, but it seems very kludgy to me.

I would also like an easier-to-use interface for somebody like me, who just goes in there and needs to run simple things. Maybe that exists and I don't know about it; maybe I should be a bit more trained on it instead of depending on someone else to do it on my behalf. But there are some things that could probably be made a little easier. There is a lot of terminology in the application. Sometimes applications come up with their own weird terminology for things, and it seems to me that is what Turbonomic did.
On the infrastructure side, they've been doing it long enough. But until I get a better use case for the cloud, the main thing I'd like to see is integration with SevOne, so that when you're doing true monitoring, the software packages work together. It would be good for Turbonomic to integrate with other companies like AppDynamics, SolarWinds, or other monitoring software. Actual application monitoring, mixed in with Turbonomic's abilities, would help wherever Turbonomic lacks the ability to monitor an application, or where applications are so customized that it's not going to be able to handle them. There is monitoring you can do with scripting that you may not be able to do with Turbonomic. So if they were able to integrate better with third-party monitoring software (obviously they can't do them all, but there are a few major companies that everybody uses) and find a way to hook into those a little more, the two could work together better.
* More Azure features are needed. They started with AWS and are now ramping up Azure features.
* We would like to see more visibility into reserved instances in Azure.
None at the moment. The product is improving with each release, and the new HTML5 interface is great. I would like to see the ability to move custom dashboards from the old interface to the new one, as that is not possible right now.
Based on the way we currently use the product, I do not have any recommended improvements. However, we do not have many of the automated features configured at this time.