DevOps Engineer at a tech vendor with 51-200 employees
Real User
Top 10
2024-09-26T07:17:00Z
Sep 26, 2024
We use Google Kubernetes Engine primarily for our production clusters, running several microservices and core services. We have one separate cluster for production testing, and for our actual production, we manage separate clusters.
We use the platform for transforming our product from VM-based to container-based. It involves migrating old monolithic applications to containers, which takes years.
We use the solution to run microservices for different healthcare workflows, such as authorization, claims, and adjudication, with no downtime and to do continuous deployments.
Packaged App development Senior Analyst at Accenture
Real User
Top 5
2024-03-13T15:23:00Z
Mar 13, 2024
We used to deploy microservices on the Java platform, and it took a lot of work to manage the required skill sets. Kubernetes is easy to use and deploy, and it was very convenient, mostly for scalability purposes.
Principal Enterprise Architect at a tech vendor with 51-200 employees
Real User
Top 20
2023-11-23T10:41:37Z
Nov 23, 2023
I have deployed the solution as a service into my private cloud, as well as into Azure infrastructure. Google Kubernetes Engine is useful for cloud-native business applications, especially microservices-based architectures, since business applications require scalability and resilience and must be highly available. Typically, Google Kubernetes Engine is used to deploy business applications and to manage integrations with cloud services. Google Cloud Platform provides a lot of SaaS offerings, so GKE helps with integrating and composing solutions in the business space or, basically, the cloud-native space.
Team Lead at a tech services company with 201-500 employees
Real User
Top 20
2023-10-12T07:09:33Z
Oct 12, 2023
Google Kubernetes Engine is used for managing data workloads. As part of the platform engineering team, we don't always know the specific data workloads running. Primarily, the application teams wanted to run specialized workflows, while the data engineering teams preferred GKE. After a thorough week-long evaluation of the different Kubernetes platforms, we informed the application teams about the available options. In case of confusion, we clarify the advantages and disadvantages of each feature. When the application teams express a preference for GKE, we support their decision, as they comprehensively understand the use cases for both EKS and GKE. This invariably leads to a smoother onboarding process, as we are well equipped to cater to their specific requirements.
I'm using an infrastructure-as-code tool, Terraform, to create Kubernetes clusters. I specify the machine type and memory requirements in my Terraform configuration, and Terraform sets up the network. With Google Kubernetes Engine (GKE), Google manages the Kubernetes control plane, so I only need to focus on creating and managing nodes. Currently, I'm creating node-based Kubernetes clusters, including private clusters for security. Workloads can be deployed to GKE using YAML files or the Kubernetes CLI. To expose deployments to end users, I create load balancers. I use cluster autoscaling and horizontal pod autoscaling (HPA) to automatically keep my workloads at the desired size. GKE also provides various options for load balancing, including Ingress. Credentials are handled using Secret resources, and configuration is done using ConfigMaps. The main workflow is to create Deployments, Pods, Services, Secrets, and ConfigMaps.
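A minimal sketch of the Terraform setup this reviewer describes, assuming the HashiCorp `google` provider; the cluster name, region, machine type, and autoscaling bounds here are hypothetical placeholders, not the reviewer's actual values:

```hcl
# Private GKE cluster: Google manages the control plane,
# we only define and manage the nodes.
resource "google_container_cluster" "primary" {
  name     = "example-cluster"   # hypothetical name
  location = "us-central1"       # hypothetical region

  # Keep node IPs off the public internet, as described above.
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = false
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }

  # Replace the default pool with an explicitly managed one.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary_nodes" {
  name     = "default-pool"
  cluster  = google_container_cluster.primary.name
  location = "us-central1"

  node_config {
    machine_type = "e2-standard-4"  # machine type / memory set here
  }

  # Cluster autoscaling keeps the node pool at the desired size.
  autoscaling {
    min_node_count = 1
    max_node_count = 5
  }
}
```

After the cluster exists, credentials are typically fetched with `gcloud container clusters get-credentials`, and the Deployments, Services, Secrets, and ConfigMaps mentioned above are applied with `kubectl apply -f` from YAML files.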
I am just starting with it. I am testing different platforms. I've done some deployments, and with some samples, I've tried to install the Kubernetes application. I am using its latest version.
We use Kubernetes for our data pipelines. For everything else, we use the standard version of GKE, and we manage the whole stack. It's a bit higher than infrastructure as a service; it's more of a platform as a service. Kubernetes is the platform we use to build our products. We're a Kubernetes-certified company. Everything in the cloud, we push toward Kubernetes. You can move Kubernetes between systems, and we deploy the system into Kubernetes securely. We're running a lot of fintech and healthcare applications, so protecting both financial data and patient health data is essential. That's one of the reasons we moved to Google Cloud: the implementation is a little more secure. Kubernetes has been the number one tool at every company over the last five years because it's a dynamic container runtime engine, and we ship software in containers. We've been shipping software in containers since 2014, and we switched completely to Kubernetes as our container runtime engine around 2016 or 2017. That's how we ship and maintain software. Kubernetes is the primary tool for all the work we do in DevOps.
Google Kubernetes Engine is used for orchestrating Docker containers. We have 30 or 40 customers working with this solution now. We'll probably see 10 to 15 percent growth in the number of customers using Google Kubernetes Engine in the future.
We have everything in Kubernetes. We're basically moving everything from the cloud into Kubernetes - inverting the cloud. We have our CI/CD pipeline built and our tools within the cluster, to support application development. The application side is always within the cluster. We have a security cluster, so everything is there. We have a database within the cluster as well; we don't need a managed cloud database, due to the fact that we use a Kubernetes-hosted database. Everything goes into the cluster. That makes it easy for us to be consistent across different environments, including development environments or Oracle environments, as everything runs within the cluster.
This is Prophaze deployed in Kubernetes. For example, Uber has all of its services deployed in Kubernetes. There should be native protection for Kubernetes, so a cloud security solution for Kubernetes has to itself run in Kubernetes; it should be a Kubernetes-native solution. The WAF, in general, can be deployed in Kubernetes as a network solution.
Kubernetes Engine is a platform that spins up applications so they can be run at scale. We are currently migrating from on-premises to the cloud version.
Head of Infra and Applications support department at a financial services firm with 201-500 employees
Real User
2019-05-22T07:18:00Z
May 22, 2019
Our primary use case is to set up the right CI/CD (continuous integration / continuous deployment) pipeline to support continuous changes in production.
The product helps us to manage Docker easily using automation.
In our company, the microservices are deployed into the containers in Google Kubernetes Engine.
We use the product to host a loyalty application.
Mainly, we target SMEs for Kubernetes services. Currently, we have a few customers, and they expect more customized experiences.
We primarily use Google Kubernetes Engine for hosting applications. We use it for hosting our microservices-based applications.
I use the tool to host a SaaS application. We also provision clusters to help students learn how to use Kubernetes.
We use this solution to handle big data workloads in GKE.
I use the solution to orchestrate the different containers of a microservices architecture.
We use it for all applications.
We use this solution to run the AI models that we have developed for anomaly detection, cloud workload prediction, as well as setting up clusters.
All of our clients are using GKE lightly. The companies are big, but the usage is small.
We have somewhere around 20 microservices that we need to deploy for our product. We are using Kubernetes Engine to deploy those 20 microservices.
We primarily use this solution for authorization and service deployment.