The product helps us to manage Docker easily using automation.
Software Architect at AIOPS group
Helps to automate Docker management
Pros and Cons
- "The solution simplified deployment, making it more automated. Previously, Docker required manual configuration, often done by developers on their computers. However, with Google Kubernetes Engine, automation extends to configuration, deployment, scalability, and viability, primarily originating from Docker rather than Kubernetes. Its most valuable feature is the ease of configuration."
- "The tool's configuration features need improvement."
What is our primary use case?
What is most valuable?
The solution simplified deployment, making it more automated. Previously, Docker required manual configuration, often done by developers on their computers. However, with Google Kubernetes Engine, automation extends to configuration, deployment, scalability, and viability, primarily originating from Docker rather than Kubernetes. Its most valuable feature is the ease of configuration.
What needs improvement?
The tool's configuration features need improvement.
For how long have I used the solution?
I have been using the product for two years.
Buyer's Guide
Google Kubernetes Engine
January 2025
Learn what your peers think about Google Kubernetes Engine. Get advice and tips from experienced pros sharing their opinions. Updated: January 2025.
831,265 professionals have used our research since 2012.
What do I think about the stability of the solution?
We had some stability issues in the past. I rate the tool's stability a nine out of ten.
What do I think about the scalability of the solution?
I rate the solution's scalability a ten out of ten. Google Kubernetes Engine has around 100-200 users in my company.
How are customer service and support?
Google's support is good and fast. It's available 24/7.
How was the initial setup?
It will take some time for someone to get used to it, and there's a learning curve that shouldn't be skipped or neglected. But then, things will start to click, and you'll notice that the product is easy to deploy. The deployment setups are readily available from Google or Microsoft. You need to configure them, which can be done with these scripts and by automating your CI/CD processes. It's all interconnected with CI/CD.
What about the implementation team?
Google Kubernetes Engine can be deployed in-house.
What's my experience with pricing, setup cost, and licensing?
The tool's licensing costs are yearly.
What other advice do I have?
The inter-system communication, including the ports used, is all described within Docker. The product manages these Docker pieces and builds the bigger picture.
We integrate it as part of our DevOps script. It's all connected, with actions for the desktop, the CD Engine, and deployment on managed Kubernetes instances on Google Cloud. It's all automated and works well together.
I rate the overall product a nine out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Performance Specialist at DKATALIS
Along with great scalability and reliability features, the tool also makes CI/CD implementation easy
Pros and Cons
- "I am satisfied with the stability offered by the solution."
- "The monitoring part requires some serious improvements in Google Kubernetes Engine, as it does not have very good monitoring consoles."
What is our primary use case?
In our company, the microservices are deployed into the containers in Google Kubernetes Engine.
What is most valuable?
Google Kubernetes Engine itself is quite a revolutionary concept. The scalability and reliability features of the product are fascinating, and they help a lot. With Google Kubernetes Engine, the area around deployment and the CI/CD implementation is easy.
What needs improvement?
The monitoring part requires some serious improvements in Google Kubernetes Engine, as it does not have very good monitoring consoles. If Google Kubernetes Engine comes up with monitoring consoles, that would be helpful.
During deployment, if the product provided users with a proper interactive dashboard that gives status or feedback on the deployment, it would be helpful.
For how long have I used the solution?
I have been using Google Kubernetes Engine for around two years.
What do I think about the stability of the solution?
I am satisfied with the stability offered by the solution. Stability-wise, I rate the solution a nine out of ten.
What do I think about the scalability of the solution?
Google Kubernetes Engine is an industry leader. Scalability-wise, I rate the solution a nine out of ten.
More than 200 people in my company use the solution.
The use of the solution is increasing day by day in my company as Google Kubernetes Engine is the main platform we use for development. Our organization is rapidly growing, and new teams are being introduced.
How are customer service and support?
Though I have not contacted the solution's technical support directly, we keep watch for regular updates through our company's regular channels. If there is any escalation or ticket raised, our company gets good support from Google. I rate the technical support an eight out of ten.
How would you rate customer service and support?
Positive
How was the initial setup?
I rate the product's initial setup phase a seven out of ten.
The solution is deployed on a hybrid cloud.
The time required to deploy the product depends on how many services you want to be deployed at a time in your environment. In general, deploying the product doesn't take much time since it can usually be done in ten to fifteen minutes. If everything goes smoothly during the deployment process, it won't take much time, and if something gets stuck, then it may require time for a person to deploy.
What's my experience with pricing, setup cost, and licensing?
Initially, Google Kubernetes Engine was a little bit cheaper, but its prices have now increased compared to the pricing models and features made available by its competitors.
What other advice do I have?
I recommend others try the solution and see whether it suits their needs and meets their budget. I recommend the solution to others because of the scalability, reliability, and ease of use it offers.
I rate the overall tool an eight out of ten.
Which deployment model are you using for this solution?
Hybrid Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Cloud Engineer at Freelancer
Provides various options for load balancing and allows for the automatic management of workloads
Pros and Cons
- "The initial setup is very easy. We can create our cluster using the command line, or using our console."
- "I would like to see the ability to create multiple node pool configurations."
What is our primary use case?
I'm using an infrastructure-as-code engine, Terraform, to create Kubernetes clusters. I specify the machine type and memory requirements in my Terraform configuration, and Terraform sets up the network. With Google Kubernetes Engine (GKE), Google manages the Kubernetes control plane, so I only need to focus on creating and managing nodes. Currently, I'm creating multi-node Kubernetes clusters, including private clusters for security. Workloads can be deployed to GKE using YAML files or the Kubernetes CLI. To expose deployments to end users, I create load balancers. I use cluster autoscaling and HPA (horizontal Pod autoscaling) to automatically keep my workloads at the desired size. GKE also provides various options for load balancing, including Ingress. Kubernetes handles credentials using Secret resources, and configuration is done using ConfigMaps. The main workflow is to create deployments, pods, services, secrets, and ConfigMaps.
What is most valuable?
Workloads are automatically manageable, and there's a cluster autoscaling option in Google Kubernetes Engine. It also supports HPA (horizontal Pod autoscaling), maintaining Pods at the desired count. You can create a load balancer for different types of service access using Ingress. Kubernetes handles credentials with Secret resources, and configuration is done through ConfigMaps.
So, autoscaling is the most valuable feature.
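The autoscaling behavior described above is typically expressed as a HorizontalPodAutoscaler manifest; here is a minimal sketch, assuming a Deployment named `web` and a 70% CPU target (both illustrative, not from the review):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the Deployment kept at the desired size
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Applied with `kubectl apply`, this keeps the Pod count between the min and max replicas based on observed CPU utilization.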
What needs improvement?
I would like to see the ability to create multiple node pool configurations. In a cluster, we can create multiple node pools, which means multiple machine configurations. This would be better because if we have a job that requires high CPU, we can have a node pool available for that job with a high-CPU machine type.
And if we have a job that requires high memory, we can have a node pool available for that job with a high-memory machine type.
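The per-workload machine types described above can be sketched with the gcloud CLI as separate node pools; the cluster name, pool names, and machine types below are illustrative assumptions:

```shell
# A pool of high-CPU machines for CPU-bound jobs
gcloud container node-pools create high-cpu-pool \
  --cluster=my-cluster --machine-type=c2-standard-8 --num-nodes=2

# A pool of high-memory machines for memory-bound jobs
gcloud container node-pools create high-mem-pool \
  --cluster=my-cluster --machine-type=n2-highmem-8 --num-nodes=2
```

Workloads can then be steered to the right pool with node selectors or taints and tolerations.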
For how long have I used the solution?
I have been using this solution for six to seven months now.
What do I think about the stability of the solution?
Google Kubernetes Engine is very stable.
How are customer service and support?
There's no issue because if I face problems, I just Google it, and I find the solution.
Which solution did I use previously and why did I switch?
I have previously worked with Docker. I have created and deployed containers using Docker and Docker Hub.
GKE is a managed Kubernetes service that runs on Google Cloud Platform (GCP). It makes it easy to deploy and manage containerized applications on GCP.
How was the initial setup?
You can deploy workloads to GKE using YAML files or the Kubernetes CLI.
The initial setup is very easy.
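As a sketch of the YAML-file route mentioned above, a minimal Deployment manifest might look like this (the name is illustrative; the image is Google's public sample app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: gcr.io/google-samples/hello-app:1.0   # public sample image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, and then exposed to end users through a Service or Ingress, as described above.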
What about the implementation team?
We can create our cluster using the command line or using our console.
First of all, you have to provide the name of your cluster, and you have to create your default node pool according to your workload. You have to specify whether the cluster is private or public, and you also need to configure the cluster networking. The security section is implemented there as well. Finally, you have to specify whether the cluster can be deleted; there's an option to enable deletion protection.
So, with all these configurations set up using the console or command line, you can either click to create or just hit the command, and your cluster will be deployed on your platform.
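The console steps above map to a single CLI invocation; a hedged sketch (cluster name, zone, machine type, and network names are illustrative, and private clusters typically need additional networking flags):

```shell
gcloud container clusters create my-cluster \
  --zone=us-central1-a \
  --num-nodes=3 \
  --machine-type=e2-standard-4 \
  --enable-ip-alias \
  --enable-private-nodes \
  --network=my-vpc --subnetwork=my-subnet
```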
Google Kubernetes Engine requires some maintenance. However, most of the maintenance tasks are handled by Google Cloud. For example, Google Cloud will automatically patch the Kubernetes Engine nodes and apply security updates.
What's my experience with pricing, setup cost, and licensing?
Kubernetes is an open-source project, so there is no licensing cost. However, there are costs associated with running Kubernetes in the cloud, such as the cost of the compute resources and the cost of the managed service (if you are using a managed Kubernetes service like GKE).
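As a back-of-the-envelope illustration of those compute and managed-service costs, a small script; the hourly rates here are placeholders, not real GCP prices:

```python
def monthly_cluster_cost(num_nodes, node_hourly_rate, mgmt_fee_hourly=0.10, hours=730):
    """Estimate monthly cost: per-node VM cost plus a flat cluster-management fee.

    node_hourly_rate and mgmt_fee_hourly are placeholder figures; real GKE
    pricing depends on machine type, region, committed-use discounts, etc.
    """
    vm_cost = num_nodes * node_hourly_rate * hours
    mgmt_cost = mgmt_fee_hourly * hours  # one flat management fee per cluster
    return round(vm_cost + mgmt_cost, 2)

# e.g. 3 nodes at a placeholder $0.20/hour each
print(monthly_cluster_cost(3, 0.20))
```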
Which other solutions did I evaluate?
I have worked with App Engine and Cloud Functions. I recently learned about the Data Flow service, which allows you to move data from one source to another in real-time or batch mode. For example, you could use it to count the number of times each word appears in a textbook. You can save the results of your data flow to a Cloud Storage bucket.
Dataflow is a powerful tool for processing large amounts of data. You can also use Dataflow to save your results, such as text or documents, to a cloud storage bucket.
When you run a Dataflow job, Dataflow will process the data from your source, such as a Cloud Storage bucket, and store the results in a bucket that you specify. If you have a real-time data processing need, such as tracking the location of a taxi, you can also use Dataflow to create a real-time streaming pipeline.
What other advice do I have?
Those who want to implement their workloads in Kubernetes can do so easily. It's automatically scalable, so you don't have to maintain your service yourself; it will be adjusted automatically based on your workload and needs.
The other thing is that Kubernetes has effectively become the standard platform for microservices. When we use microservices, they can be easily managed with Kubernetes, and it makes errors easy to isolate, so the solution is really helpful.
With microservices, the whole application won't fail; only the affected deployment's pods may raise an error, and the rest of the application keeps running. So my advice is to use a microservice style of development, implement each service as a container, and run the container workloads in Kubernetes using Deployments and Services; the project will then be maintained automatically.
Overall, I would rate the solution a nine out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
CTO at Translucent Computing Inc
Fully Google ecosystem integrated, saves valuable time, and rapid deployment
Pros and Cons
- "The main advantage of GKE is that it is a managed service. This means that Google is responsible for managing the master node in the Kubernetes cluster system. As a result, we can focus on deploying applications to the slaves, while Google handles any updates and security patches. GKE is also fully integrated into the Google ecosystem, including solutions such as BigQuery and Vertex AI, which makes it easier for us to integrate these tools into our process. This integration ultimately speeds up our time to market and reduces the time and effort spent on managing infrastructure. The managed aspect of GKE allows us to simply deploy and utilize it without having to worry about the technicalities of infrastructure management."
- "While the GKE cluster is secure, application-level security is an essential aspect that needs to be addressed. The security provided by GKE includes the security of communication between nodes within the cluster and the basic features of Kubernetes security. However, these features may not be sufficient for the security needs of an enterprise. Additional security measures must be added to ensure adequate protection. It has become a common practice to deploy security tools within a Kubernetes cluster. It would be ideal if these tools were included as part of the package, as this is a standard requirement in the industry. Thus, application-level security should be integrated into GKE for improved security measures."
What is our primary use case?
The primary purpose for utilizing Google Kubernetes Engine (GKE) is for application deployment. This managed cloud service eliminates the need for operating our own Kubernetes cluster. Our applications are designed in a microservice architecture, meaning they are comprised of numerous smaller components, each running on its own container. GKE acts as an orchestration engine for these containers, managing and organizing them. In essence, GKE serves as a platform for both application development and deployment within a Kubernetes cluster. This is our main use case for GKE.
In addition to deploying applications, we also utilize GKE for deploying our machine learning models. By containerizing these models, we are able to deploy them within the Kubernetes cluster, making it easier for our applications to communicate with them. As the application and the model are co-located within GKE, it is simpler for us to manage this process and make predictions in a timely manner. This is an advantageous use case for us.
Google's managed machine learning tooling is not what we rely on; instead, we have our own platform in place built on top of GKE. Although GKE is an open engine, it still requires compliance, security, and observability, so we provide these elements to our clients ourselves. This includes observability through the collection of metrics and logs from all containers within GKE, which we aggregate and display through dashboards and dashboarding tools. Additionally, we have built our own security system within GKE, which includes hosting security tools to manage passwords, secrets, and certificates. These tools are also deployed within GKE.
In addition to application deployment, our continuous integration and continuous deployment (CI/CD) pipeline are housed within GKE. Our pipeline includes tools, such as Jenkins and Slack, which aid in the building, containerization, and deployment of software. This comprehensive pipeline within GKE streamlines the development process and allows for the efficient and effective release of our software. Currently, GKE serves all of our use cases related to software development and deployment.
How has it helped my organization?
In our organization, GKE is utilized to orchestrate containers that hold microservices, which combine to form an application. Furthermore, we also utilize GKE to host self-hosted databases and build our own data pipelines. As a result, GKE acts as the foundation for our data platform, supporting multiple different types of databases within the cluster. The solution has been helpful for our organization.
What is most valuable?
The main advantage of GKE is that it is a managed service. This means that Google is responsible for managing the master node in the Kubernetes cluster system. As a result, we can focus on deploying applications to the slaves, while Google handles any updates and security patches. GKE is also fully integrated into the Google ecosystem, including solutions such as BigQuery and Vertex AI, which makes it easier for us to integrate these tools into our process. This integration ultimately speeds up our time to market and reduces the time and effort spent on managing infrastructure. The managed aspect of GKE allows us to simply deploy and utilize it without having to worry about the technicalities of infrastructure management.
Recently, Google has introduced new features to GKE. One of the latest additions includes a managed backup service, which backs up the disks attached to the containers within the platform. This service is a valuable asset provided by Google. Furthermore, they also offer configuration management, providing all the necessary infrastructure and services to accompany the use of Kubernetes. This saves time and reduces the effort needed to manage the cluster, allowing for a more focused approach toward business-critical tasks, such as containers, building pipelines, and more. GKE provides the necessary support and resources to allow for rapid deployment and efficient management.
What needs improvement?
While the GKE cluster is secure, application-level security is an essential aspect that needs to be addressed. The security provided by GKE includes the security of communication between nodes within the cluster and the basic features of Kubernetes security. However, these features may not be sufficient for the security needs of an enterprise. Additional security measures must be added to ensure adequate protection. It has become a common practice to deploy security tools within a Kubernetes cluster. It would be ideal if these tools were included as part of the package, as this is a standard requirement in the industry. Thus, application-level security should be integrated into GKE for improved security measures.
Additionally, a crucial aspect that was previously lacking was a reliable backup system. Although Google has recently released a beta version of GKE backups, it still requires improvement. Within a cluster, many components, such as databases, have a state and a disk attached to them. Hence, it is essential to have both physical snapshots of the disk and logical backups of the data. However, the backup system offered by GKE is not yet fully developed and requires more work to become a robust enterprise feature. For enterprise applications, it is imperative to manage state and take regular backups due to the Service Level Agreements (SLAs) signed with clients, which often require multiple backups per day. Thus, further development and improvement of the backup system are necessary.
For how long have I used the solution?
I have been using Google Kubernetes Engine for approximately six years.
What do I think about the stability of the solution?
GKE is extremely stable, with very few issues related to stability. This is due to frequent and continuous updates to the system. In the world of Kubernetes, it is common to maintain one version behind and two versions ahead, allowing for a clear understanding of upcoming releases and the ability to subscribe to the latest versions. Google is always at the forefront of updates and releases, and users have the option to either use the latest and most cutting-edge versions or stick with the stable and tried-and-true versions. There are no problems or concerns with stability in GKE.
What do I think about the scalability of the solution?
GKE was designed with scalability as its core feature, offering both flexibility and scalability in its functionality. It is easily adaptable for scaling both horizontally and vertically, making it ideal for our machine-learning tasks as well. The ability to attach a GPU to a node in the Kubernetes cluster is a straightforward process, providing us with the option to deploy a Kubernetes cluster with or without video cards, based on our specific use case requirements. The horizontal scalability of GKE is instantaneous, as the solution was specifically engineered to excel in this aspect. The scalability of GKE is one of its most valuable features, making it a prime selling point.
How are customer service and support?
Regarding support from GKE, I have limited knowledge. Our team is highly skilled in the field and would not require support from Google. In fact, I have communicated to Google that we do not require certification from them, as we are already Kubernetes certified and feel no need to be Google certified. I believe there is no return on investment for us in obtaining this certification. Despite Google's efforts to encourage us, we have informed them that they should focus on getting certified themselves rather than having us certified. Our team has a vast amount of experience and knowledge in the field, having been involved in the beta project even before Google knew the ins and outs of the technology. Therefore, we are capable of resolving any issues that arise on our own, without the need for assistance from Google.
Which solution did I use previously and why did I switch?
This solution is better than Amazon and Azure.
How was the initial setup?
Deploying GKE is a swift and seamless process, accomplished by running scripts. Our approach to infrastructure is based on the principle of infrastructure as code, utilizing Terraform for all operations. Google offers Terraform integration, further simplifying the process. Instead of manual intervention through the console or script writing, we choose to automate every aspect of our deployment, including GKE deployment, through Terraform. The cloud engineering provided by Google encompasses all the necessary tools to rapidly deploy and manage GKE, freeing us from the tedious task of managing individual components of the cluster.
Getting started with GKE is relatively simple, but ensuring proper deployment can be challenging. The ease of use, with just a click of the mouse button, does not guarantee a secure and compliant deployment. Google should do more to educate users on the proper way to deploy GKE and provide resources such as recipes, or integrate these best practices into the standard offering. For example, making the GKE cluster public should be avoided because it poses a security risk: each node in the cluster is publicly facing the internet, making it vulnerable to attackers who could target any of the nodes and potentially access a piece of the application and its data.
The requirement of a private deployment in GKE comes with the need for extra configuration and networking setup, which can pose a challenge for developers and companies who are not familiar with the process. Although Google provides guidance and best practices, it is still necessary to have a good understanding of network engineering in order to successfully deploy Kubernetes. The complexity of the process can result in incorrect or insecure versions of Kubernetes being deployed, as seen with the recent hack on Tesla's GKE due to their improper deployment. Ideally, these configurations and setup steps should be integrated into the solution itself, eliminating the need for excessive technical expertise.
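The Terraform-driven private-cluster approach described above might be sketched as follows; the cluster name, region, and CIDR range are illustrative assumptions, not from the review:

```hcl
resource "google_container_cluster" "primary" {
  name     = "private-cluster"          # illustrative
  location = "us-central1"

  private_cluster_config {
    enable_private_nodes    = true      # nodes get no public IPs
    enable_private_endpoint = false     # control plane still reachable
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  ip_allocation_policy {}               # VPC-native (alias IP) networking

  initial_node_count = 3
}
```

This is the extra networking configuration the reviewer mentions: the private nodes then need NAT or proxy access for outbound traffic, which is part of the network-engineering knowledge required.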
I rate the setup of Google Kubernetes Engine a seven out of ten.
What's my experience with pricing, setup cost, and licensing?
The pricing for GKE is dependent on the type of machine or virtual machine (VM) that is selected for the nodes in the cluster. There is a degree of flexibility in choosing the specifications of the machine, such as the number of CPUs, GPUs, and so on. Google provides a variety of options, allowing the user to create the desired cluster composition. However, the cost can be quite steep when it comes to regional clusters, which are necessary for high availability and failover. This redundancy is crucial for businesses and is required to handle an increase in requests in case of any issues in one region, such as jumping to a different region in case of a failure in the Toronto region. While it may be tempting to choose the cheapest type of machines, this may result in a limited capacity and user numbers, requiring over-provisioning to handle additional requests, such as those for a web application.
The cost of using GKE, which includes having a redundant system and failover capacity, appears to be overly high. The requirement of having this extra capacity in case of disk failure or other issues means paying for the extra provision, which contributes to the elevated cost. This pricing model seems to be an unfair practice on Google's part as redundancy is a fundamental aspect of any business and must be paid for regardless of whether it is used or not. When it comes to general pricing, the choice of what is best for the specific use case is left to the user.
What other advice do I have?
I rate Google Kubernetes Engine an eight out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
CTO at Translucent Computing Inc
Extremely scalable, easy to setup, and has good machine learning
Pros and Cons
- "The deployment of the cluster is very easy."
- "Our critique is that we have to do too much work to get the cluster production-ready."
What is our primary use case?
We have everything in Kubernetes. We're basically moving everything from the cloud into Kubernetes - inverting the cloud. We have all that built for the CIT pipeline and have our tools within the cluster.
This is to support application development. The application side always runs within the cluster, and we have a security cluster as well, so everything is there. We have databases within the cluster too; we don't need a managed cloud database, because we run our databases inside Kubernetes. Everything goes into the cluster.
It makes it easy for us to be consistent across different environments, including development environments or in Oracle environments, as everything runs within the cluster.
What is most valuable?
The solution allows you to work on and from multiple clouds. You can use Google's cloud, or mix and match clouds across suppliers.
You can split into regions within your own cloud.
The deployment of the cluster is very easy. You just click a button and it's deployed, or just run a simple command and it deploys itself. You don't have to go through the steps of installing the cluster yourself. It's already deployed and managed.
The master of the cluster is also managed by Google. If there are any updates, they are responsible to handle that. It just takes a little bit of a load from our task load. You don't have to manage the master, or the version of the cluster yourself.
You don't have to think about the installation process. They take care of the underlying infrastructure deployment and managing the versioning of the cluster. When we need to update, it's simple. They'll help us to easily, smoothly update those cluster nodes. You don't have to deal with that either.
When it comes to Google Cloud, the Kubernetes advantage for machine learning is the TPU, the Tensor Processing Unit, which is much faster than a GPU. If the clients are willing to pay for it, we'll run the machine learning jobs within the Kubernetes cluster and connect to Google TPUs, which gives us the ability to finish the job much, much faster.
What needs improvement?
It's maybe a controversial topic, as Kubernetes itself should be just your bottom layer. However, within your own engine, you expect to do more with time. Since we're putting so much into the cluster, it would be nice if some of this stuff was already done, baked into the cluster.
Our critique is that we have to do too much work to get the cluster production-ready. Most people just start it and think that's production. That's not really production. That's just bootstrapping the cluster, with all the tools that you need.
A lot of people rely on cloud tools, or a cloud-built system, to get going. We would like to have that baked into the cluster. Due to our usage pattern of the cluster and how heavily we use it, our expectation is to have more tools baked into the cluster. There should be more emphasis on tools developed immediately from the cluster to support application development versus relying on third-party vendors, like Jenkins.
The third-party vendors have to adapt to Kubernetes, and that creates a problem, as there's always a delay. Third parties don't have much incentive to do anything right away. That means we have to wait for these guys to catch up. We don't have a big enough team to actually change every open source code, as there's so much of it.
For how long have I used the solution?
We started using Kubernetes in 2015, around the time it started. Whenever Google launched their tools is about the time we started. Before that, we used Kubernetes, however, we were deploying it ourselves.
What do I think about the stability of the solution?
The solution is stable.
The solution is cloud-native, and every cloud uses basically the same version. That's what makes it easy for us to move between clouds. However, Google wants users to integrate with its own cloud storage and security, which is where issues can arise.
It allows you to create private clusters. There's no competitive advantage for cluster clouds right now, which is good for us, as it's a uniform-looking ecosystem that allows us to move between clouds easily.
What do I think about the scalability of the solution?
Kubernetes is designed to scale horizontally and vertically as well. It scales quite well.
We have unlimited scaling through horizontal scaling. We can add more Kubernetes nodes. When applications do grow, we need to maybe horizontally scale applications or our databases. We just kick off another node when we need however much memory CPU and just keep on scaling it. Obviously, you pay for it, however, scaling is extremely easy.
A lot of the time we automate the scaling as well. Based partly on AI and cloud automation processing, we detect the CPU or GPU usage, and if it exceeds a certain threshold, the cluster automatically adds another node, so it's self-serving. Scaling is almost automated for the cases we handle.
The system knows by itself how to scale dynamically. The dynamic elastic scaling is baked into our systems as well. When people do use, for example, the machine learning special cluster, that one goes automatically from zero to 23 nodes depending on the users. A lot of times, we do shut it down, with just no usage. When people kick off a job, it automatically spins up on a new cluster node and deploys the job, gets another job, and spins up another one. It dynamically grows. We have to allocate pools that we have of an increased cluster in Google Cloud and it works well.
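The zero-to-23-node behavior described above is what GKE's cluster autoscaler provides on a node pool; a hedged sketch of enabling it with the gcloud CLI (cluster and pool names are illustrative, the node range mirrors the review):

```shell
gcloud container clusters update my-cluster \
  --enable-autoscaling \
  --node-pool=ml-pool \
  --min-nodes=0 --max-nodes=23
```

With a minimum of zero, the pool scales down to nothing when idle and spins nodes up as jobs arrive, as described.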
How are customer service and technical support?
We are basically able to solve our own problems, so we don't need the assistance of technical support. We might have used it once in the past six or seven years, and that is it.
How was the initial setup?
The initial setup is very straightforward. Everything is basically done for you. It's nice and easy.
We do the cluster management ourselves. Cluster administrators take on the more serious role: they are responsible for the disks backing these applications, the databases, the deployment of infrastructure tools, et cetera.
What about the implementation team?
Our application developers work with the administrators to actually deploy the applications; the cluster still has to do something meaningful, so someone has to add the business logic. The developers work with our administrators to get that done. We also support a lot of different monitoring tools for visibility, which is important to us because we follow a microservice architecture across applications. Microservices are like little black boxes, and we have to be able to see inside them.
What's my experience with pricing, setup cost, and licensing?
Multi-cloud is an expensive endeavor, as the tools are overpriced. We're looking at options that aren't based on Anthos, Google's multi-cloud solution.
You pay money to Google, and they take a piece of the action on top of that.
CPU is very cheap, however, GPU is very expensive. If you want to iterate on data science tasks within a Kubernetes cluster, it will cost you.
There is no licensing cost. You pay for the cloud, and you pay for what you use based on the CPU and RAM of the virtual machines. The cluster is still made up of computers, so you pay for the computers backing it. If you spin up a Kubernetes cluster with three nodes, you pay for each of those nodes, the virtual machines you get bootstrapped with. You pay for machine time as with any cloud; Google prices each machine type, and a machine type is defined by its CPU and RAM. If you want 60 GB of RAM, you pay for that RAM, and likewise for CPUs.
The same is true if you ask for a GPU machine, as most virtual machines don't come with a video card unless you ask for one. Then you pay for both the computer and the video card.
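The pay-per-machine model described above is straightforward to sketch: each node bills for its vCPUs, its RAM, and any attached GPU. The rates below are purely illustrative placeholders, not actual Google Cloud prices, which vary by region and machine type:

```python
# Hypothetical hourly rates -- illustrative only, NOT real GCP pricing.
RATE_PER_VCPU_HOUR = 0.03   # USD per vCPU-hour (assumed)
RATE_PER_GB_HOUR = 0.004    # USD per GB of RAM per hour (assumed)
RATE_PER_GPU_HOUR = 0.70    # USD per attached GPU per hour (assumed)

def monthly_node_cost(vcpus: int, ram_gb: int, gpus: int = 0,
                      hours: int = 730) -> float:
    """Cost of one node: you pay for the VM's CPU and RAM,
    plus any GPU you explicitly attach."""
    hourly = (vcpus * RATE_PER_VCPU_HOUR
              + ram_gb * RATE_PER_GB_HOUR
              + gpus * RATE_PER_GPU_HOUR)
    return round(hourly * hours, 2)

def cluster_cost(nodes: int, vcpus: int, ram_gb: int, gpus: int = 0) -> float:
    """A three-node cluster bills as three VMs: one price per backing machine."""
    return round(nodes * monthly_node_cost(vcpus, ram_gb, gpus), 2)
```

With these assumed rates, a 4-vCPU, 16 GB node costs about $134/month, a three-node cluster of them about $403, and attaching a single GPU to one node roughly quintuples its cost, matching the review's point that GPU is where the money goes.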
What other advice do I have?
We're actually a certified Kubernetes vendor as well.
We're using version 1.19; the most up-to-date is 1.20. We're never on the latest version; we're always a version behind, or even two versions behind, to give them time to sort through their issues. We're using 1.19 on Azure, Google Cloud, and EKS, although EKS might be two versions behind.
Most of the time, we deploy as a private cluster within the cloud, isolated from public infrastructure. That's for security reasons: we don't want our cluster exposed to the public internet.
We also have a hybrid deployment with Azure on-premises, just to make integration easier. The on-premises environment is connected to the cloud, so we can use the same Kubernetes-native tools. We develop tools once for Kubernetes and can then deploy them on-premises or in the cloud; it doesn't matter.
We also are doing multi-cloud as well, and we're deploying from Google Cloud into AWS.
With Azure, we have one giant cloud right now. That way, we can partition a cluster across multiple clouds and multiple regions. If Google Cloud goes down for whatever reason, as it did two years ago due to bad configurations, we're covered. We do multi-cloud because the solution is critical and we can't afford to have it go down.
We are basically a full-service company. We do everything for our clients, including application development and everything that entails.
I'd advise users to take security seriously. Don't just deploy things on the internet; make sure your cluster is secure. You want to be able to tell your clients that you have a secure cluster implementation. That requires a bit of setup with every cloud: creating a private network and private subnetworks, and managing ingress and egress, the inputs and outputs of requests coming into your cluster.
These are things you have to think about before you deploy, right at the start. All the clouds support it; you just have to know how to set up your VPC (virtual private cloud) with each provider and how to use subnets to isolate your cluster, so it's not exposed on the internet. The cluster stays private, and any requests coming from the internet have to go through your load balancer to reach your cluster.
You also have to manage requests going out of your cluster; those go through NAT and a cloud router. That way, you know what's coming in and what's going out, and you can monitor traffic coming into and within your cluster.
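The subnet-isolation idea above can be sketched with Python's standard ipaddress module: a private-cluster node should hold an RFC 1918 address inside the subnet you carved out for it. The CIDR range below is a hypothetical example, not a recommended layout:

```python
import ipaddress

# Illustrative private range -- the actual CIDR depends on your VPC design.
CLUSTER_SUBNET = ipaddress.ip_network("10.10.0.0/20")  # assumed node subnet

def node_is_isolated(node_ip: str) -> bool:
    """A private-cluster node must hold a private (RFC 1918) address inside
    the cluster subnet; anything else suggests public exposure."""
    ip = ipaddress.ip_address(node_ip)
    return ip.is_private and ip in CLUSTER_SUBNET
```

A node at 10.10.3.7 passes the check; a public address such as 34.120.8.1 fails, as does a private address outside the designated subnet, which is exactly the kind of misplacement a private-cluster audit should flag.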
These days, a lot of people use containers pulled directly from third-party sources or public repositories (the Docker containers that the Kubernetes cluster runs), and those can come with malware. You basically want security policies implemented for every cluster. You don't get that from the cluster providers; you have to get it from third-party vendors. This is where the competition around Kubernetes comes in.
In general, I would rate the solution at an eight out of ten.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Disclosure: I am a real user, and this review is based on my own experience and opinions.
DevOps Engineer at a tech vendor with 51-200 employees
Effective project management with improved permissions but complex configurations
Pros and Cons
- "The most beneficial feature is the ability to separate each project and manage permissions more effectively."
- "The primary area for improvement would be the complexity involved when working with Google Kubernetes Engine, especially when using Terraform."
What is our primary use case?
We use Google Kubernetes Engine primarily for our production clusters, running several microservices and main services. We have one main separate cluster for production testing, and for our actual production, we manage separate clusters.
How has it helped my organization?
Google Kubernetes Engine has helped us manage our infrastructure more securely, especially when separating projects and assigning permissions. This categorization enhances security as we streamline roles and permissions management.
What is most valuable?
The most beneficial feature is the ability to separate each project and manage permissions more effectively. This categorization is especially useful for security purposes. I find managing IAM roles in GCP to be better than AWS.
What needs improvement?
The primary area for improvement would be the complexity involved when working with Google Kubernetes Engine, especially when using Terraform. It can be more complex compared to AWS.
Additionally, the process of managing IAM roles and integration with other Google services can be cumbersome and could use some simplification.
For how long have I used the solution?
I have been using Google Kubernetes Engine for about one year to one and a half years.
What do I think about the stability of the solution?
We have not encountered any major stability issues with the Google Kubernetes Engine. Aside from the usual errors that occur day-to-day, such as image pull-back errors, we maintain a stable environment by using versions that are one or two versions behind the latest release.
What do I think about the scalability of the solution?
The auto-scaling performance is really good in both GCP and AWS. I have not experienced any issues with auto-scaling capabilities, and they meet our demands efficiently.
How are customer service and support?
Usually, our upper management takes care of any escalations to tech support, so I do not have direct experience with their customer service.
How would you rate customer service and support?
Neutral
How was the initial setup?
The initial setup of Google Kubernetes Engine took me about two days. I primarily used Terraform scripts for deployment and testing.
What about the implementation team?
Initially, when starting with Google Kubernetes Engine, I required some help, especially with configurations involving Helm charts and additional components such as the ingress controller. Once everything was set up, maintaining it became more manageable.
What was our ROI?
Google Kubernetes Engine has been cost-effective and has improved our operational productivity. However, GKE can be more expensive compared to AWS when it comes to certain services like Compute Engine. Integrating with multiple cloud providers is easier with GCP, making it a flexible solution for our diverse requirements.
What's my experience with pricing, setup cost, and licensing?
I'm aware of the normal pricing, but it's not at the top of my head. AWS is generally cheaper than GCP for most use cases. Costs fluctuate based on purpose and size.
What other advice do I have?
If you are using multiple cloud provider services, such as DNS management from DigitalOcean or S3 buckets from AWS, integrating with Google is simpler than AWS. For smaller functions, services like AWS Lambda can be more cost-effective than running them on GKE. It is important to utilize the proper tools for easy maintenance.
I'd rate the solution seven out of ten.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Last updated: Sep 29, 2024
Senior Software Engineer at Moniepoint
Efficient, and offers the ability to virtualize the database
Pros and Cons
- "We hardly have a breakdown. It's been very stable."
- "I would rate the scalability a seven out of ten."
What is our primary use case?
We use it for deploying our applications. All our applications are based on Kubernetes, so we create our products with Kubernetes.
What is most valuable?
I find Google's services very stable, and I appreciate some of the unique features it offers, like the ability to virtualize the database and access detailed analytics, which simplifies management.
Its main advantage is the technology itself, which allows our applications to scale easily. This scalability reduces downtime significantly.
For how long have I used the solution?
I started using it when I joined the company. Initially, I was more familiar with things around Azure, but Google Kubernetes Engine was my first experience with Google’s cloud services when I joined MoniePoint. I contacted Google and learned about the competitive cloud market with AWS, Azure, and Google.
What do I think about the stability of the solution?
I would rate the stability an eight out of ten. We hardly have a breakdown. It's been very stable.
We, the developers, do experience some downtime occasionally, but we are relatively new to it.
What do I think about the scalability of the solution?
I would rate the scalability a seven out of ten.
Which solution did I use previously and why did I switch?
I used Azure. The switch to Google Kubernetes Engine was due to a change in my employment. I started using Google when I joined this new place last year. It's a very efficient tool.
What about the implementation team?
The DevOps team takes care of this aspect.
What other advice do I have?
Overall, I would rate the solution a seven out of ten. It's worth trying out.
I would recommend Google as a cloud service option. I wasn't aware of how good it was initially, but having tested it, I can see that it's very efficient; we hardly have any issues.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Senior Solutions Architect at a tech vendor with 1,001-5,000 employees
Provides deployment across multiple regions and has a user-friendly setup process
Pros and Cons
- "The initial setup process is simpler and more user-friendly than other cloud providers."
- "The product's integration with third-party vendors needs improvement."
What is our primary use case?
We use the platform for transforming our product from VM-based to container-based. It involves migrating old monolithic applications to containers, which takes years.
What is most valuable?
One valuable feature of the product is its openness to global networks, which allows for integration and deployment across multiple regions, something that is not always possible with other cloud providers.
What needs improvement?
The product's integration with third-party vendors needs improvement.
For how long have I used the solution?
I have been working with Google Kubernetes Engine for approximately two to three years.
What do I think about the stability of the solution?
I rate the product stability an eight.
What do I think about the scalability of the solution?
I rate the product scalability an eight.
How was the initial setup?
The initial setup process is simpler and more user-friendly than other cloud providers.
What other advice do I have?
Google Kubernetes Engine has made the deployment process easier than Amazon and Azure. It is a good product and often more cost-effective.
I recommend it to others and rate it an eight out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Last updated: Jun 28, 2024
Buyer's Guide
Download our free Google Kubernetes Engine Report and get advice and tips from experienced pros
sharing their opinions.
Updated: January 2025
Product Categories
Container Management
Popular Comparisons
VMware Tanzu Platform
Red Hat OpenShift Container Platform
Rancher Labs
Kubernetes
Nutanix Kubernetes Engine NKE
Amazon Elastic Container Service
HPE Ezmeral Container Platform