We use IBM Turbonomic to automate our cloud operations, including monitoring, consolidating dashboards, and reporting. This helps us get a consolidated view of all customer spending into a single dashboard, allowing us to identify opportunities to improve their current spending.
Senior Director of Middleware Hosting Technology at a financial services firm with 10,001+ employees
Real User
Top 20
Aug 17, 2023
Turbonomic helps us understand resource usage and enables us to make decisions about how to utilize those resources. For example, let's say you check your monthly water bill and see that it's up 25 percent, but you don't know why. It keeps going like that until you check your toilet and find that the seal is slightly broken. Every now and then, a resource keeps running when you're not using it. Turbonomic will identify when the toilet is running and fix it for you, so your bill stops going up. It automatically makes the adjustments.
Senior Manager Solution Architecture at a consultancy with 10,001+ employees
User
Top 20
Jul 18, 2023
Initially, our use case was to reduce cloud spend. But Turbonomic is much more than just a reduction-in-cloud-spend tool. As we went on, it became more about optimizing applications and making sure that they function as expected, while reducing the cost of cloud resources. It became a question of how we make applications function properly, at speed, with the best cost possible, and without creating any risk for the application itself.
Specialist at a pharma/biotech company with 10,001+ employees
Real User
Top 20
Jun 29, 2023
We don't use the full functionality of Turbonomic because our company is subject to regulations around making those changes. Some of that functionality would require going through a change process. We've been using it more for heuristics and analysis on the right-sizing of our VMs in VMware.
Systems Engineer at a government with 201-500 employees
Real User
Top 20
Feb 17, 2023
We use Turbonomic to gather information that we can archive and review when there are performance issues or other problems. We look at the statistical data to see what was going on across the cluster at that particular time and whether there was an issue. I generally look at the underlying resources: IOPS utilization, CPU, CPU ready, memory ballooning, and anything else that might cause performance issues. Turbonomic is better for that than the native vCenter views, and we didn't have vROps or anything similar.
Assistant Consultant at a tech services company with 10,001+ employees
Consultant
Top 20
Dec 10, 2022
I belong to the on-premises team. We're a telecom company with a private and public cloud, but we don't use Turbonomic for the cloud infrastructure. We use Turbonomic for capacity forecasting and analysis. We do not use the solution as much on the application layer. We scan only the infrastructure. Turbonomic isn't providing any useful reports on the application layer. We did some application groupings, but it didn't help us because we didn't receive any application information. There are more than 50 Turbonomic users at the company, including admins and developers. There is a 10-person admin team, and the rest are end-users with limited access to see the reports on their machines.
Senior Systems Engineer at a university with 1,001-5,000 employees
Real User
Sep 21, 2022
We use Turbonomic to evaluate all of our virtualized clusters. Initially, we were only using Turbonomic for our long-term VMware stacks. Now we are monitoring VMware ESXi 7 and Nutanix AHV stacks. On the server side, we have 400 VMs. We don't evaluate the VDI side because we have 1,100 seats, so it's too expensive. We made a special ELA contract for vRealize Ops for VDI on that side. It wasn't horrible. It's just the bare minimum to show us if there's a problem in the stack.

We mainly use Turbonomic as a heat map, but we aren't drilling down into the performance of individual applications like Kubernetes, Docker, or Swarm; we use other tools to monitor the transaction levels, etc., instead of Turbonomic. Turbonomic is our overall heat map in our NOC. We fire it up, and when we see a red flag, we dig into it, and off we go, but the basic application components do not have our Docker workloads linked to them. It mainly works on the surface of the virtualization stack itself.

Our infrastructure is solid enough that I get a VDI call about every three weeks. My server farms are built like tanks, so it takes a lot to take them down. We can sleep well at night. Everything is on-prem. The only cloud solutions we use at the college are SaaS systems. We don't put much in the cloud because cloud environments are too vulnerable to hacks and exploits.

We're going from a silo system to HCI, from 450 hard disks to hybrid flash. While we undergo that significant infrastructure change, we're using Turbonomic to watch VMware because it has aged and our migration isn't happening the way we want. Once we switch to pure Nutanix, we will reevaluate Turbonomic when the next contract is up. I will probably keep it because management is used to Turbonomic's reporting. That saves me a lot of OPEX time building those reports out of Nutanix by hand. I've been here for 16 years, and my CIO has been here for 17 years. They're used to the reports we've been developing over the last decade. We developed them using VMTurbo. We set the standard with that first tool for reporting.

We use Nutanix Prism Central to manage everything on the Nutanix side, but Turbonomic provides ancillary information that gives me a holistic view of reporting and covers features that Prism Central doesn't. Turbonomic provides linkages, visual aids, graphs, charts, etc. I'm the one who uses it. It's up on my NOC screen. We log and monitor it pretty much every day. Then, once a month, it generates reports on its own. In that sense, it's used or monitored daily. We watch what it's reporting every day on the heat map. As for issues, maybe every couple of weeks something pops up that we look at.
Advisory System Engineer at an insurance company with 1,001-5,000 employees
Real User
Apr 6, 2022
The product looks at things in the cloud, in Azure, and gives us reports of things it could possibly do there; however, we mainly use it on-prem for our VMware environment. The use case for Turbonomic really began with us trying to reduce costs, along with CPU and RAM. We had an idea that we could possibly save some money, but it was theoretical and something we couldn't really put our hands on or touch. Turbonomic was the solution that gave us a tangible way to see what we could do, to see those changes made in an efficient manner, and to have the reports behind it to back up the changes. That, along with the placement it does in VMware, where it places machines where it best sees fit on different hosts, is how we use the product.
Director, Infrastructure, Wintel Engineering at an insurance company with 5,001-10,000 employees
Real User
Feb 28, 2022
We use Turbonomic for workload placement. We've leveraged it for workload migrations, so if we get a new storage array or a new cluster and we need to migrate workloads over to it, we can set up a policy and let it run as it can. It is especially valuable with storage array migrations, which can be very time-consuming if done manually. The biggest thing we leverage it for is the right-sizing of virtual servers. This applies both to hot-add and to improvement/maintenance windows where resource reclamation of the virtual servers takes place.
We have a hybrid cloud setup that includes some on-prem resources, and we have AWS as our primary cloud provider. We have one or two resources on the Google Cloud Platform, but we don't target those with Turbonomic. Our company has a couple of different teams using Turbonomic. Our on-premises VMware virtualization and Windows group uses Turbonomic to manage our on-prem resources and make sure they're the correct size. I'm on the cloud engineering team, and I use it in a unique way. We use it for right-sizing VMs in AWS. We're using it to improve performance efficiency in our Kubernetes containers and make sure the requests are in line with what they should be. If an application has far more memory allocated than it needs, Turbonomic helps us decide to scale that back. We have a platform that we use for our internal deployments. I use the API to get data and transform it for use in our platform. I've developed APIs that sit between our internal platform and Turbonomic. When our developers create and release code, these APIs allow them to take advantage of Turbonomic without using it directly. It's built into our platform so they can benefit from the performance improvements Turbonomic recommends, but they don't need access to Turbonomic.
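As a rough sketch of that kind of integration, the snippet below polls a Turbonomic instance for pending VM resize actions over its REST API and reshapes them for an internal deployment platform. The endpoint paths, field names, and the internal platform URL are illustrative assumptions rather than details from this reviewer's environment; check the API reference for your Turbonomic version before relying on them.

```python
import os
import requests

# Assumed connection details for illustration; adjust for your own environment.
TURBO_HOST = os.environ.get("TURBO_HOST", "https://turbonomic.example.internal")
PLATFORM_URL = "https://platform.example.internal/api/rightsizing"  # hypothetical internal endpoint

session = requests.Session()

# Authenticate against the appliance (the /api/v3/login path is an assumption;
# verify it against your Turbonomic version's REST API documentation).
session.post(
    f"{TURBO_HOST}/api/v3/login",
    data={"username": os.environ["TURBO_USER"], "password": os.environ["TURBO_PASSWORD"]},
).raise_for_status()

# Fetch pending actions from the real-time market and keep only VM resize recommendations.
resp = session.get(f"{TURBO_HOST}/api/v3/markets/Market/actions")
resp.raise_for_status()

recommendations = [
    {
        "vm": action["target"]["displayName"],
        "details": action.get("details"),
        "action_uuid": action.get("uuid"),
    }
    for action in resp.json()
    if action.get("actionType") == "RESIZE"
    and action.get("target", {}).get("className") == "VirtualMachine"
]

# Hand the simplified list to the internal platform so developers never touch Turbonomic directly.
requests.post(PLATFORM_URL, json=recommendations, timeout=30).raise_for_status()
```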
ICT Infrastructure Team Cloud Engineer at a mining and metals company with 10,001+ employees
Real User
May 10, 2021
We primarily use it as a cost reduction tool for our cloud spend in Azure, along with performance optimization and awareness. We use Turbonomic to identify opportunities to optimize our environments from a cost perspective, leveraging the utilization metrics to validate that resources are right-sized correctly and to avoid overprovisioning of public cloud workloads. We also use Turbonomic to identify workloads that require additional resources to avoid performance constraints.

We use the tools to assist in the orchestration of Turbonomic-generated decisions so we can incorporate those decisions through automation policies, which saves the long man-hours of having someone available after hours or on a weekend to actually perform an action. The decisions from those actions are, in the majority of cases, scheduled for a specific date and time. They are executed without having anyone standing by to click a button. Some of those automated orchestrations are performed automatically, without us even reviewing the decision, based on constraints that we have configured. So, the tool identifies the resource that has a decision attached, to either address a performance issue or take a cost-saving optimization, then it automatically implements that decision at the specific times we may have defined within the business to minimize impact as much as possible. In some cases, we might take a quick look manually and see if it makes sense to implement an action at a specific date and time. We then place the recommendation into a schedule that orchestrates the automation so we are not tying up essential IT people to take those actions.

We take these actions for our public cloud offering within Azure. We don't use it so much for on-prem workloads. We don't have any other public cloud offerings, like AWS or GCP. We do have it monitor our on-prem workloads, but we don't have much interest in on-prem because we're in the process of a lift-and-shift migration moving all workloads to the cloud. So, we are not doing too much with the on-prem side. We do use it for some migration planning and cost optimization, to see what a workload would look like once we migrated it into the cloud. From an on-prem perspective, we use it for some of the migration and cost planning. However, most of our implementations are for optimization and performance in the public cloud.

It provides application metrics and estimates the impact of taking a suggested action from two aspects:
* It shows you the impact from the financial aspect in a public cloud offering, so it will show you whether an action will end up costing you more money or saving you money. It also shows what that action will look like from a performance and resource utilization perspective. It will tell you, if you make the change, what the resource utilization will look like from a percentage perspective, whether you will be consuming more or fewer resources, and whether you will have enough resource overhead for performance spikes.
* It gives you the ability to forecast what the utilization consumption is going to look like in the future, so you can gauge how the action you're taking now will look and work for you in the long term.
Global IT Operations Manager at an insurance company with 501-1,000 employees
Real User
Mar 31, 2021
We use the Reserved Instance recommendations and the sizing recommendations for our VM family types in Azure. We use it for cost optimization of our workloads there. We started with the on-prem solution, but then we went with the SaaS model. Now, Turbonomic handles the installation and support of the appliances.
Head of Enterprise Wide Technical Architecture / Enterprise Technology Specialist at a healthcare company with 5,001-10,000 employees
Real User
Mar 30, 2021
The primary use case is to optimize our environment. We will take our OpenShift environment and use Turbonomic to monitor the size of the pods, then determine where to place the pods as well. We will make recommendations from that perspective. Turbonomic is an excellent product as far as we are concerned for managing the pod sizes and determining the best sizing for those pods. Right now, our development staff prefer to maximize the size of their pods and requests in terms of memory and CPU, and that causes us to potentially run out of resources. We are managing the pods, their performance, and the utilization. It is more of a pod deployment model. Right now, we are monitoring the whole application as well as its allocation of resources, CPU, memory, etc. So, the application will be optimized and Turbonomic will help us optimize that sizing, because that is a problem right now. We will be deploying this solution across all our OpenShift platforms to manage our existing environment.
AVP Global Hosting Operations at an insurance company with 10,001+ employees
Real User
Dec 29, 2020
We wanted the performance assurance because we have seasonal spikes in our volume. One of the use cases was making sure that we could adjust for seasonal spikes in volume. Another use case was looking at how we increase our density and make more effective use of the assets that we have on the floor. The third use case was planning: being able to adjust for mergers, acquisitions, and divestitures, and quickly separate out the infrastructure required to support that workload.

We just upgraded and are using the latest on-prem version. We use Turbonomic for our on-prem hosting: servers, storage, and containers. We also use it in Azure. We are trying to use it across multiple hosting environments. The networking team is not really using it. I come at it from a hosting standpoint, where the main focus is on servers and storage, and then the linkage to applications and the resources they are using.
It has a feature called "right-sizing". This makes sure that our virtual machines are sized properly so we don't have a lot of wasted resources from being either too large or too small. This way, our machines perform much better than they otherwise would.
There have been quite a few use cases, even some that were probably unintended:
* Reduce our footprint and cost. It handled that perfectly.
* Handle our RI purchasing, which is what we are in the process of doing now.
* Automate shutdowns and startups so we can turn machines off when they are not being used. We have several machines in this category and are going to continue adding more once we go through some finalization.

We are also using it to delete unattached volumes and to manage databases. The unintended use case was that we started looking at what else we could save. We realized that we had a ton of data in Blob Storage for backups. Turbonomic can't see that, but it brought it to light because we wanted to find a way to look at our overall spending. So, we have saved a bunch of money by reducing that footprint. It's on-prem, but we are in the process of moving into the cloud.
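The unattached-volume cleanup mentioned above can also be sanity-checked outside Turbonomic. Here is a minimal sketch, using the Azure SDK for Python, that lists managed disks in a subscription with no VM attached; the subscription ID is a placeholder, and whether to delete anything it finds is left as a manual decision.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; pull the real one from configuration.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A managed disk with no managed_by reference is not attached to any VM.
for disk in compute.disks.list():
    if not disk.managed_by:
        print(f"Unattached disk: {disk.name} ({disk.disk_size_gb} GB)")
```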
Director of Enterprise Server Technology at an insurance company with 10,001+ employees
Real User
Dec 3, 2020
Our use case: Planning for sizing servers as we move them to the cloud. We use it as a substitute for VMware DRS. It does a much better job of leveling compute workload across an ESX cluster. We have a lot fewer issues with ready queue, etc. It is just a more sophisticated modeling tool for leveling VMs across an ESX infrastructure. It is hosted on-prem, but we're looking at their SaaS offering for reporting. We do some reporting with Power BI on-premise, and it's deployed to servers that we have in Azure and on-prem.
Advisory System Engineer at an insurance company with 1,001-5,000 employees
Real User
Dec 2, 2020
We're using it for placement automation. Turbonomic will look at the virtual machines that are on different hosts and say, "Hey, there are too many machines on this host, and this host doesn't have a lot of machines on it." It will place the virtual machines in a balanced way on different hosts and try to balance the hosts out as best it can. We're also using it for CPU and RAM addition and automation: Do we need to add more memory or take away memory in the environment, or look at a machine to see if it is being used to the best capacity? We also use Turbonomic for planning. It looks at our environment and we can make plans; for example, if we want to put some of our environment into a cloud-based system like Azure, it will tell us our costing. We use it for about 4,000 machines.
Server Administrator at a logistics company with 1,001-5,000 employees
Real User
Dec 2, 2020
The primary reason we initially got it was to help us right-size all of our VMs, to make sure they were appropriately sized for how much they were actually being used. That was the biggest push to get this, and we implemented it. We have also discovered that Turbonomic can automatically suspend virtual machines on a schedule. For example, in the afternoons and evenings when a VM wasn't going to be used, it could just be shut down so that we wouldn't be charged eight to 10 hours of compute time, per machine, for time it wasn't going to be used at all. That's been pretty useful. We're also using it to help us determine the reserved instances that we need. We haven't purchased the reserved instances yet, but we're using Turbonomic's suggested reserved instance purchasing algorithms to assist us in finding the right balance for the number of RIs that we want to purchase.
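The scheduled suspension described above comes down to deallocating VMs during the hours they are idle so they stop accruing compute charges. Below is a minimal sketch of that operation with the Azure SDK for Python, assuming a hypothetical auto-shutdown tag marks which VMs may be stopped; when the action is automated inside Turbonomic, its own scheduling replaces a script like this, which would otherwise run from cron or Azure Automation in the evening.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder values for illustration only.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "example-rg"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in compute.virtual_machines.list(RESOURCE_GROUP):
    tags = vm.tags or {}
    if tags.get("auto-shutdown") == "true":
        # Deallocate (not just power off) so the VM stops incurring compute charges.
        compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name).wait()
        print(f"Deallocated {vm.name}")
```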
Principal Engineer at an insurance company with 10,001+ employees
Real User
Dec 2, 2020
Currently, we're doing migrations from older versions of Windows, both in the Azure cloud and on-prem in our VMware vCenters. We use this tool to do comparisons between the current and future workloads and what they would look like, based on usage. So, it is kind of a rightsizing exercise, either downsizing or upsizing, depending on the requirements. We just put all that information into Turbonomic, and it builds us out a new VM, exactly the size that we need, based on the trending and analysis. Then, you can also put in some factors, saying, "Look, it was Windows 2008, and we're going to Windows 2019," or, "We're going to grow the database by X amount." This tool helps you do some of the analysis so you get the right size right out of the box. We love that. I oversee a lot of stuff, so I don't really get an opportunity to go in there and point and click. We have people who do that. It covers the Azure cloud and VMware. Turbonomic understands the resource relationships at each of these layers and the risks to performance for each. You can compartmentalize your most critical workloads to make sure they are getting the required resources so the business can continue to run, especially when we get hit by a lot of work at once.
System Engineer at a financial services firm with 201-500 employees
Real User
Nov 8, 2020
We pretty much use it only for load balancing between hosts. We're a payroll company and Turbonomic is really important for us from about November until March, each year, because our end-of-year processing increases our load by six to seven times. That's especially true in November and December when companies are running their last payrolls. If we're going to be losing any customers, they definitely have to finalize everything all at one shot. In addition, companies that pay out bonuses at the end of the year also have to be running all these extra payrolls. There are a slew of reasons for extra payrolls at that time of year. They may need to do some cleanup if they messed up something and didn't do so all year long. At that point, they have to do it before December 31st. And after December 31st is the beginning of tax preparation, so our systems are very heavily utilized. It does a great job year-round, but we're in a situation where we have plenty of resources during most of the year, but at year-end, depending on how busy it gets, it can overwhelm the systems if you're not careful, depending on where a VM sits, on which host.
Turbonomic provides insight and recommendations for our Azure cloud configurations, aimed at optimum efficiency and cost. By analyzing the load on demand, the product can sense when more resources are needed as well as when resources are being wasted.
Turbonomic keeps our cluster balanced and our VMs running optimally. It shows us where in our environment resources can be recovered, as well as where extra is needed.
Turbonomic is hypervisor-agnostic, so we use it to monitor both our VMware and XenServer environments. It helps identify when a virtual machine needs additional resources or needs to be migrated to another host. We allow the migrations to happen automatically. For resource changes, however, we look at the recommendation and may or may not take the suggestion, for various reasons.
I mostly provide it to my clients. There are multiple reasons why they would use it depending on the client's needs and their solution.
We typically use it for optimizing the performance and resource allocation of virtual machines.
We do vMotion through VMware. We let Turbonomic control our vMotion. We do server rightsizing and capacity management with it.