We use the solution for scenarios where there is a sudden surge in user demand. For example, during major events on TV channels like Star Sports or National Geographic, the number of users can spike dramatically. Similarly, during year-end sales on platforms like Amazon, millions of people log in, causing a significant increase in server requests. Therefore, we rely on Auto Scaling to automatically manage these fluctuations by scaling the servers up when there is high demand and scaling down when the demand decreases.
Deputy General Manager at a tech services company with 10,001+ employees
Apr 15, 2024
Auto Scaling is a feature available within AWS Compute services. If my workload runs mostly in the daytime and the usage is low at night, I keep five instances in the day and one or two at night. During Christmas or other festive seasons, when the load is much more, we can enable the tool to scale the number of instances based on the load automatically.
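As a rough illustration of the day/night schedule this reviewer describes, the desired capacity can be thought of as a function of the hour of day. The hours and bounds below are my own assumptions, not AWS defaults; in AWS itself this would typically be expressed as two scheduled scaling actions on the Auto Scaling group rather than application code:

```python
# Illustrative sketch only: a day/night capacity schedule like the one
# described above. The window hours and counts are assumptions.
DAY_START, DAY_END = 8, 20   # hypothetical daytime window
DAY_CAPACITY = 5             # five instances during the day
NIGHT_CAPACITY = 2           # one or two instances at night

def desired_capacity(hour: int) -> int:
    """Return the desired instance count for a given hour (0-23)."""
    if not 0 <= hour <= 23:
        raise ValueError("hour must be in 0..23")
    return DAY_CAPACITY if DAY_START <= hour < DAY_END else NIGHT_CAPACITY
```

The sketch only shows the intended capacity curve; a festive-season surge would be handled on top of this by a dynamic scaling policy, as the reviewer notes.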
We use the solution to optimize and balance the availability and cost of AWS. It has scaling policies that can be set according to our preferences. We evaluate how much CPU a project requires and whether the resources are being underutilized. We can reduce the project timelines and balance the workloads.
We use AWS Auto Scaling for scaling purposes, particularly to manage costs and meet client requirements for scaling up resources at specific times. It helps us optimize expenses and handle increased traffic by automatically scaling instances when needed.
We mostly integrate AWS Auto Scaling with CloudWatch monitoring and set a target CPU utilization for some devices. If our application CPU utilization is higher, we scale using the solution, which is very easy.
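Target tracking of the kind described here works by keeping a CloudWatch metric near a target value. A simplified version of the capacity calculation looks like the following; note that the real AWS algorithm also applies cooldowns, instance warm-up, and min/max group bounds, which this sketch omits:

```python
import math

def target_tracking_capacity(current_capacity: int,
                             current_cpu: float,
                             target_cpu: float) -> int:
    """Simplified target-tracking estimate: scale capacity in proportion
    to how far observed CPU utilization is from the target. The real AWS
    policy adds cooldowns, warm-up, and group min/max bounds."""
    if current_capacity <= 0 or target_cpu <= 0:
        raise ValueError("capacity and target must be positive")
    return max(1, math.ceil(current_capacity * current_cpu / target_cpu))
```

For example, 4 instances running at 90% CPU against a 60% target gives ceil(4 × 90 / 60) = 6 instances.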
We use AWS Auto Scaling to manage load for instances, including factors such as increased traffic. It helps us monitor and set up alerts and scaling policies to manage the memory usage for infrastructure.
We use the solution to scale instances vertically or horizontally based on what is going on in the environment, for example by increasing CPU capacity. We can decide at what time our instances must be up and running. We can set in our environment that when utilization reaches 70 or 80 percent, the tool must add another instance, and when it drops below 30 percent, it can remove an instance. It helps us reduce costs in our environment.
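The scale-out-above-70, scale-in-below-30 behavior described above can be sketched as a simple threshold rule. The thresholds mirror the review; the one-instance step size and the min/max bounds are my own assumptions:

```python
def adjust_capacity(utilization: float, current: int,
                    minimum: int = 1, maximum: int = 10) -> int:
    """Add one instance above the high-water mark, remove one below the
    low-water mark, otherwise keep the current count. Step size and
    bounds are assumed, not from the review."""
    if utilization >= 70:          # high threshold from the review (70-80%)
        return min(current + 1, maximum)
    if utilization < 30:           # low threshold from the review
        return max(current - 1, minimum)
    return current
```

In AWS terms this corresponds to a pair of CloudWatch alarms driving step (or simple) scaling policies on the Auto Scaling group.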
The good thing about Auto Scaling is that it minimizes downtime, giving you assurance of stability and robustness in your system.
We wanted a dedicated retail solution on Amazon. Over the course of using it, we have grown to fifteen networks, and we still use Kubernetes with our Amazon service. We have many specs for other environments, and we can configure them easily using Amazon with Kubernetes scaling.
I currently have a large customer with more than 30 servers, for which we provide APIs to their customers for online gaming. Their customers are divided into three regions: Asia, Europe, and the rest of the world. If the three default servers required for a region reach 50% capacity, new servers are automatically launched and the traffic is divided among them. We follow continuous integration and continuous deployment (CI/CD) practices. When all servers are working correctly, we create new servers, configure them, delete the old ones, and the new servers are immediately deployed.
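The create-new, verify, then delete-old rollout this reviewer describes resembles a rolling replacement. A toy sketch of that ordering follows; the server names and the health-check callback are hypothetical, and a real CI/CD pipeline would drain connections and update the load balancer as well:

```python
def rolling_replace(old_servers: list, new_servers: list, is_healthy) -> list:
    """Bring up new servers first, keep the old fleet serving until every
    new server passes its health check, then retire the old servers."""
    if not all(is_healthy(s) for s in new_servers):
        # New fleet not fully ready: keep both generations in service.
        return old_servers + new_servers
    return new_servers
```

The key property, as in the review, is that old servers are only deleted after the new ones are confirmed to be working.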
I used the tool during my internship for a client.
If we don't use Auto Scaling, then AWS will be much more expensive. It's part of the optimization.
We use AWS Auto Scaling to define the number of instances depending on specific requirements.