What is our primary use case?
The primary purpose for utilizing Google Kubernetes Engine (GKE) is application deployment. This managed cloud service eliminates the need to operate our own Kubernetes cluster. Our applications are designed in a microservice architecture, meaning they are comprised of numerous smaller components, each running in its own container. GKE acts as an orchestration engine for these containers, managing and organizing them. In essence, GKE serves as a platform for both application development and deployment within a Kubernetes cluster. This is our main use case for GKE.
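As a rough illustration of what this orchestration looks like in practice, the following is a minimal sketch of deploying one containerized microservice to a GKE cluster with the official Kubernetes Python client. The service name, image, namespace, and replica count are hypothetical placeholders, not our actual configuration.

```python
# Minimal sketch: deploy one microservice container to a GKE cluster using the
# official Kubernetes Python client. All names and the image are hypothetical.
from kubernetes import client, config

def deploy_microservice():
    # Assumes kubeconfig credentials for the GKE cluster are already in place
    # (e.g. via `gcloud container clusters get-credentials`).
    config.load_kube_config()

    container = client.V1Container(
        name="orders-service",                    # hypothetical service name
        image="gcr.io/my-project/orders:1.0.0",   # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "orders"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="orders-service"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "orders"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    deploy_microservice()
```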
In addition to deploying applications, we also utilize GKE for deploying our machine learning models. By containerizing these models, we are able to deploy them within the Kubernetes cluster, making it easier for our applications to communicate with them. As the application and the model are co-located within GKE, it is simpler for us to manage this process and make predictions in a timely manner. This is an advantageous use case for us.
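For example, here is a minimal sketch of how an application pod might call a co-located model over the cluster's internal DNS. The Service name, namespace, port, and request payload are hypothetical.

```python
# Minimal sketch: an application pod calling a containerized model that is
# exposed as a Service in the same GKE cluster. Names and payload are hypothetical.
import requests

# Inside the cluster, the model is reachable via Kubernetes DNS:
# <service>.<namespace>.svc.cluster.local
MODEL_URL = "http://model-server.ml.svc.cluster.local:8501/predict"

def predict(features: dict) -> dict:
    # Because the application and the model share the cluster network,
    # the call stays inside GKE and avoids public egress latency.
    response = requests.post(MODEL_URL, json={"instances": [features]}, timeout=2.0)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(predict({"amount": 42.0, "country": "CA"}))
```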
Machine learning is not the only use case for which we utilize GKE; we have also built our own platform on top of it. Although GKE is an open engine, it still requires compliance, security, and observability, so we provide these elements to our clients. This includes observability through the collection of metrics and logs from all containers within GKE, which we aggregate and display through dashboards. Additionally, we have built our own security layer within GKE, which includes hosting security tools to manage passwords, secrets, and certificates. These tools are also deployed within GKE.
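As a simplified illustration of the observability side, the sketch below iterates over pods and pulls recent container logs with the Kubernetes Python client. In practice this is done by a dedicated agent that feeds our dashboards rather than by polling, and all names here are hypothetical.

```python
# Minimal sketch: collect recent container logs from every pod so they can be
# shipped to a dashboarding backend. A production setup would use an agent
# (e.g. a DaemonSet) instead of polling like this.
from kubernetes import client, config

def collect_recent_logs(tail_lines: int = 100):
    config.load_kube_config()
    core = client.CoreV1Api()

    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        name = pod.metadata.name
        namespace = pod.metadata.namespace
        try:
            logs = core.read_namespaced_pod_log(
                name=name, namespace=namespace, tail_lines=tail_lines
            )
            # In the real pipeline these lines would be forwarded to the
            # aggregation backend instead of printed.
            print(f"--- {namespace}/{name} ---")
            print(logs)
        except client.exceptions.ApiException as exc:
            print(f"could not read logs for {namespace}/{name}: {exc.reason}")

if __name__ == "__main__":
    collect_recent_logs()
```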
In addition to application deployment, our continuous integration and continuous deployment (CI/CD) pipeline is housed within GKE. The pipeline includes tools such as Jenkins and Slack, which aid in building, containerizing, and deploying our software. This comprehensive pipeline within GKE streamlines the development process and allows for the efficient and effective release of our software. Currently, GKE serves all of our use cases related to software development and deployment.
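The stages that pipeline automates can be sketched roughly as follows. In reality Jenkins orchestrates these steps and reports to Slack; the image tag and deployment name below are hypothetical placeholders.

```python
# Minimal sketch of the build -> containerize -> deploy flow the CI/CD
# pipeline automates. Image tag, registry, and deployment name are hypothetical.
import subprocess

IMAGE = "gcr.io/my-project/orders:1.0.0"   # hypothetical image tag

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def pipeline() -> None:
    # 1. Build and containerize the service.
    run(["docker", "build", "-t", IMAGE, "."])
    # 2. Push the image to the registry the cluster pulls from.
    run(["docker", "push", IMAGE])
    # 3. Roll the new image out to the GKE deployment.
    run(["kubectl", "set", "image", "deployment/orders-service",
         f"orders-service={IMAGE}"])
    # 4. Wait for the rollout to finish before reporting success (e.g. to Slack).
    run(["kubectl", "rollout", "status", "deployment/orders-service"])

if __name__ == "__main__":
    pipeline()
```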
How has it helped my organization?
In our organization, GKE is used to orchestrate the containers that hold our microservices, which combine to form an application. We also use GKE to host self-hosted databases and to build our own data pipelines. As a result, GKE acts as the foundation for our data platform, supporting several types of databases within the cluster. The solution has been helpful for our organization.
What is most valuable?
The main advantage of GKE is that it is a managed service. This means that Google is responsible for managing the control plane of the Kubernetes cluster. As a result, we can focus on deploying applications to the worker nodes, while Google handles updates and security patches. GKE is also fully integrated into the Google ecosystem, including solutions such as BigQuery and Vertex AI, which makes it easier for us to integrate these tools into our process. This integration ultimately speeds up our time to market and reduces the time and effort spent on managing infrastructure. The managed aspect of GKE allows us to simply deploy and utilize it without having to worry about the technicalities of infrastructure management.
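As an example of that ecosystem integration, here is a minimal sketch of a workload on GKE querying BigQuery directly. It assumes the pod's service account has BigQuery access (for instance via Workload Identity); the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: a workload running on GKE querying BigQuery. Dataset and
# table names are hypothetical; credentials come from the pod's environment.
from google.cloud import bigquery

def daily_order_counts():
    client = bigquery.Client()  # picks up the pod's Google Cloud credentials
    query = """
        SELECT DATE(created_at) AS day, COUNT(*) AS orders
        FROM `my-project.analytics.orders`
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    for row in client.query(query).result():
        print(row.day, row.orders)

if __name__ == "__main__":
    daily_order_counts()
```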
Recently, Google has introduced new features to GKE. One of the latest additions is a managed backup service, which backs up the disks attached to the containers within the platform. This service is a valuable asset provided by Google. Furthermore, they also offer configuration management, providing all the necessary infrastructure and services to accompany the use of Kubernetes. This saves time and reduces the effort needed to manage the cluster, allowing for a more focused approach to business-critical tasks, such as building containers and pipelines. GKE provides the necessary support and resources to allow for rapid deployment and efficient management.
What needs improvement?
While the GKE cluster is secure, application-level security is an essential aspect that needs to be addressed. The security provided by GKE includes the security of communication between nodes within the cluster and the basic features of Kubernetes security. However, these features may not be sufficient for the security needs of an enterprise. Additional security measures must be added to ensure adequate protection. It has become a common practice to deploy security tools within a Kubernetes cluster. It would be ideal if these tools were included as part of the package, as this is a standard requirement in the industry. Thus, application-level security should be integrated into GKE for improved security measures.
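To illustrate the kind of application-level secret handling that currently has to be layered on top of GKE, here is a minimal sketch that fetches a credential stored as a Kubernetes Secret. The secret and key names are hypothetical, and a hardened setup would normally use a dedicated secrets manager or a CSI driver instead.

```python
# Minimal sketch: read a credential that a security tool has stored as a
# Kubernetes Secret. Secret and key names are hypothetical.
import base64
from kubernetes import client, config

def get_db_password(namespace: str = "default") -> str:
    config.load_kube_config()
    core = client.CoreV1Api()
    secret = core.read_namespaced_secret(name="orders-db-credentials",
                                         namespace=namespace)
    # Secret values are base64-encoded in the Kubernetes API.
    return base64.b64decode(secret.data["password"]).decode("utf-8")

if __name__ == "__main__":
    print("password length:", len(get_db_password()))
```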
Additionally, a crucial aspect that was previously lacking was a reliable backup system. Although Google has recently released a beta version of GKE backups, it still requires improvement. Within a cluster, many components, such as databases, have a state and a disk attached to them. Hence, it is essential to have both physical snapshots of the disk and logical backups of the data. However, the backup system offered by GKE is not yet fully developed and requires more work to become a robust enterprise feature. For enterprise applications, it is imperative to manage state and take regular backups due to the Service Level Agreements (SLAs) signed with clients, which often require multiple backups per day. Thus, further development and improvement of the backup system are necessary.
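As an illustration of the logical backups we still have to run ourselves, the sketch below dumps a database hosted in the cluster and ships it to Cloud Storage. The host, database, user, and bucket names are hypothetical, and the physical disk snapshot is handled separately.

```python
# Minimal sketch: logical backup of an in-cluster database, shipped to Cloud
# Storage so it survives loss of the cluster or its disks. Names are hypothetical.
import datetime
import subprocess
from google.cloud import storage

def backup_postgres(host: str = "orders-db.default.svc.cluster.local",
                    database: str = "orders",
                    bucket_name: str = "my-project-db-backups") -> None:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/{database}-{stamp}.sql"

    # Logical backup of the data (the disk snapshot is taken separately).
    subprocess.run(["pg_dump", "-h", host, "-U", "backup_user",
                    "-f", dump_file, database], check=True)

    # Upload the dump to a bucket outside the cluster.
    blob = storage.Client().bucket(bucket_name).blob(f"{database}/{stamp}.sql")
    blob.upload_from_filename(dump_file)

if __name__ == "__main__":
    backup_postgres()
```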
For how long have I used the solution?
I have been using Google Kubernetes Engine for approximately six years.
What do I think about the stability of the solution?
GKE is extremely stable, with very few issues related to stability. This is due to frequent and continuous updates to the system. In the world of Kubernetes, it is common to maintain one version behind and two versions ahead, allowing for a clear understanding of upcoming releases and the ability to subscribe to the latest versions. Google is always at the forefront of updates and releases, and users have the option to either use the latest and most cutting-edge versions or stick with the stable and tried-and-true versions. There are no problems or concerns with stability in GKE.
What do I think about the scalability of the solution?
GKE was designed with scalability as its core feature, offering both flexibility and scalability in its functionality. It is easily adaptable for scaling both horizontally and vertically, making it ideal for our machine-learning tasks as well. The ability to attach a GPU to a node in the Kubernetes cluster is a straightforward process, providing us with the option to deploy a Kubernetes cluster with or without video cards, based on our specific use case requirements. The horizontal scalability of GKE is instantaneous, as the solution was specifically engineered to excel in this aspect. The scalability of GKE is one of its most valuable features, making it a prime selling point.
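As a small illustration of the GPU side of this, the following sketch requests one GPU for a pod so the scheduler places it on a GPU-equipped node pool. The image, accelerator type, and node label value are hypothetical placeholders.

```python
# Minimal sketch: a pod that requests one GPU so Kubernetes schedules it onto
# a GPU node pool. Image and accelerator type are hypothetical.
from kubernetes import client, config

def launch_gpu_job():
    config.load_kube_config()

    container = client.V1Container(
        name="trainer",
        image="gcr.io/my-project/trainer:latest",   # hypothetical image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}           # ask the scheduler for one GPU
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="trainer-gpu"),
        spec=client.V1PodSpec(
            containers=[container],
            restart_policy="Never",
            # Pin the pod to the GPU node pool via GKE's accelerator label.
            node_selector={"cloud.google.com/gke-accelerator": "nvidia-tesla-t4"},
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_job()
```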
How are customer service and support?
Regarding support from GKE, I have limited knowledge. Our team is highly skilled in the field and would not require support from Google. In fact, I have communicated to Google that we do not require certification from them, as we are already Kubernetes certified and feel no need to be Google certified. I believe there is no return on investment for us in obtaining this certification. Despite Google's efforts to encourage us, we have informed them that they should focus on getting certified themselves rather than having us certified. Our team has a vast amount of experience and knowledge in the field, having been involved in the beta project even before Google knew the ins and outs of the technology. Therefore, we are capable of resolving any issues that arise on our own, without the need for assistance from Google.
Which solution did I use previously and why did I switch?
This solution is better than the equivalent offerings from Amazon and Azure.
How was the initial setup?
Deploying GKE is a swift and seamless process, accomplished by running scripts. Our approach to infrastructure is based on the principle of infrastructure as code, using Terraform for all operations. Google offers Terraform integration, which further simplifies the process. Instead of manual intervention through the console or ad hoc script writing, we choose to automate every aspect of our deployment, including GKE deployment, through Terraform. The tooling Google provides around GKE encompasses everything needed to rapidly deploy and manage the cluster, freeing us from the tedious task of managing its individual components.
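We provision GKE through Terraform, so the following is purely an illustration of the same "no manual console clicks" idea, expressed instead with Google's Python client library. The project, region, cluster name, and machine type are hypothetical.

```python
# Illustrative sketch only: create a small GKE cluster programmatically with
# Google's Python client library (our actual approach uses Terraform).
from google.cloud import container_v1

def create_cluster(project_id: str = "my-project",
                   location: str = "northamerica-northeast2") -> None:
    client = container_v1.ClusterManagerClient()
    cluster = container_v1.Cluster(
        name="demo-cluster",
        initial_node_count=3,
        node_config=container_v1.NodeConfig(machine_type="e2-standard-4"),
    )
    operation = client.create_cluster(
        parent=f"projects/{project_id}/locations/{location}",
        cluster=cluster,
    )
    print("cluster creation started:", operation.name)

if __name__ == "__main__":
    create_cluster()
```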
Getting started with GKE is relatively simple, but ensuring a proper deployment can be challenging. The ease of use, with just a click of the mouse button, does not guarantee a secure and compliant deployment. Google should do more to educate users on the proper way to deploy GKE and provide resources such as recipes, or integrate these best practices into the standard offering. For example, making the GKE cluster public should be avoided because it poses a security risk: each node in the cluster faces the public internet, making it vulnerable to attackers who could target any of the nodes and potentially access a piece of the application and its data.
Requiring a private deployment in GKE brings the need for extra configuration and networking setup, which can pose a challenge for developers and companies who are not familiar with the process. Although Google provides guidance and best practices, a good understanding of network engineering is still necessary to deploy Kubernetes successfully. The complexity of the process can result in incorrect or insecure Kubernetes deployments, as seen with the widely reported compromise of Tesla's Kubernetes environment, which stemmed from an improper deployment. Ideally, these configurations and setup steps should be integrated into the solution itself, eliminating the need for excessive technical expertise.
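As a hedged sketch of the extra private-cluster configuration referred to above, expressed with the same Python client library, the snippet below builds a cluster specification with private nodes. The network names and CIDR range are hypothetical, and the NAT, firewall, and authorized-network setup a real deployment also needs is omitted.

```python
# Hedged sketch: a cluster spec with private nodes. Network names and CIDR
# range are hypothetical; surrounding networking setup is omitted.
from google.cloud import container_v1

def private_cluster_spec() -> container_v1.Cluster:
    return container_v1.Cluster(
        name="private-demo-cluster",
        initial_node_count=3,
        network="private-vpc",        # hypothetical VPC
        subnetwork="gke-subnet",      # hypothetical subnet
        private_cluster_config=container_v1.PrivateClusterConfig(
            enable_private_nodes=True,        # nodes get no public IPs
            enable_private_endpoint=False,    # control plane still reachable for admins
            master_ipv4_cidr_block="172.16.0.0/28",
        ),
    )
```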
I rate the setup of Google Kubernetes Engine a seven out of ten.
What's my experience with pricing, setup cost, and licensing?
The pricing for GKE depends on the type of machine or virtual machine (VM) selected for the nodes in the cluster. There is a degree of flexibility in choosing the specifications of the machines, such as the number of CPUs, GPUs, and so on, and Google provides a variety of options, allowing the user to create the desired cluster composition. However, the cost can be quite steep for regional clusters, which are necessary for high availability and failover. This redundancy is crucial for businesses and is required to absorb an increase in requests if one region has issues, for example, failing over to a different region if the Toronto region goes down. While it may be tempting to choose the cheapest machine types, doing so can limit capacity and the number of users served, requiring over-provisioning to handle additional requests, such as those for a web application.
The cost of using GKE, which includes having a redundant system and failover capacity, appears to be overly high. The requirement of having this extra capacity in case of disk failure or other issues means paying for the extra provision, which contributes to the elevated cost. This pricing model seems to be an unfair practice on Google's part as redundancy is a fundamental aspect of any business and must be paid for regardless of whether it is used or not. When it comes to general pricing, the choice of what is best for the specific use case is left to the user.
What other advice do I have?
I rate Google Kubernetes Engine an eight out of ten.
Disclosure: I am a real user, and this review is based on my own experience and opinions.