Google Kubernetes Engine: It Shouldn’t Be That Difficult


You can’t deny that Google Kubernetes Engine (GKE) offers real benefits.

The most significant benefit of Google’s managed Kubernetes offering, which launched roughly a year after Kubernetes itself, is that it greatly simplifies running Kubernetes clusters.

GKE addresses the management headache Kubernetes introduces across its two main planes of operation: the control plane, which manages the entire cluster and its worker nodes, and the data plane, which routes and manages the traffic flowing through the cluster’s networks.
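As a concrete sketch of that split (the cluster name, zone, and node count below are placeholder values, not from the article), creating a standard GKE cluster provisions both planes, after which `kubectl` talks to the Google-hosted control plane:

```shell
# Create a standard GKE cluster: Google hosts the control plane,
# while the worker nodes (the data plane) run as Compute Engine VMs.
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3

# Fetch credentials so kubectl can reach the managed control plane.
gcloud container clusters get-credentials demo-cluster \
    --zone us-central1-a

# Every kubectl call goes to the control plane, which schedules
# work onto the data-plane nodes.
kubectl get nodes
```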

If you’re running a small number of Kubernetes clusters, this should make your life easier. But managing two planes becomes a major headache when you’re rotating clusters, orchestrating nodes, and moving pods around a disparate development environment.

So why not let Google handle most of the control plane? That’s exactly what GKE did when it was introduced in 2015: it abstracted away the underlying infrastructure. This means developers can do what they’ve always done best: write great code. They don’t have to worry about the infrastructure elements managed by Kubernetes clusters (a growing concern given new supply chain attacks).

This means your DevOps teams can be more effective, whether you’re a startup or an agile team within an enterprise. They can focus on managing their CI/CD pipeline and spend less time moving nodes around (or introducing errors along the way). The result: you save time and maximize efficiency.

Another important advantage is that GKE, being based on Kubernetes, supports the common Docker container format. DevOps teams can store and access Docker images and easily run Docker containers.
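In practice, that workflow looks something like the following sketch (the project, repository, and image names are placeholders): build a Docker image, push it to a registry, and run it on the cluster.

```shell
# Build a Docker image locally and push it to a container registry
# such as Artifact Registry (all names below are placeholders).
docker build -t us-docker.pkg.dev/my-project/my-repo/hello-app:v1 .
docker push us-docker.pkg.dev/my-project/my-repo/hello-app:v1

# Run that same image as a deployment on the GKE cluster.
kubectl create deployment hello-app \
    --image=us-docker.pkg.dev/my-project/my-repo/hello-app:v1
```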

Eliminate the complexity of GKE

Yet despite GKE’s many advantages, many developers see it as a bit of rocket science. In a landscape where talent is scarce, and where recruiting the right people into DevOps is hard enough (try finding a Scrum Master), learning how to use GKE optimally can be a daunting task.

Yes, it can simplify the complexity of Kubernetes, but it has a learning curve that many DevOps teams must climb. Developers still need to know how to create a production-ready environment. And it only entered mainstream development a few years ago.

“GKE technology has been around for a while (since 2006), but it became mainstream a few years ago,” said Ravi Paul, country manager for Malaysia at Searce.

Then there is scalability. Moving nodes to a new cluster is quite easy if your development and production environments are well aligned and simple. That is generally not the case in many companies, which means enterprises get bogged down in the tasks of provisioning and managing clusters.

Finally, GKE poses a trade-off between cost and flexibility. Although it was designed to bring some management sensibility to Kubernetes environments, it still requires talent. Without a properly trained team, you can end up with a budget-busting GKE environment.

“Overall, the user experience of GKE has not been entirely positive. It takes effort to build before you can reap the benefits,” Paul said.

Google understood these challenges and introduced new solutions.

Join Ravi Paul and representatives from Google as they discuss how you can simplify Kubernetes development and provide an introduction to GKE at the conference titled “Kubernetes Engine Demystified: Getting Down to the Basics” held on 22 October 2022 in Kuala Lumpur. To register, please click here.

Making GKE Convenient for Mortals

In 2019, Google introduced a serverless container offering called Google Cloud Run. It abstracted away the parts of the infrastructure that developers don’t want to manage. It did this by taking over the data plane.

Google Cloud Run had limitations. It supported only stateless applications, didn’t fully address flexibility, and its heavy abstraction wasn’t necessarily good for security.
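To illustrate how much Cloud Run abstracts away, deploying a container is a single command; there is no cluster to create at all (the service name, image path, and region below are placeholders):

```shell
# Deploy a container image as a fully managed Cloud Run service.
# No cluster, node pool, or control plane is exposed to the user.
gcloud run deploy hello-service \
    --image us-docker.pkg.dev/my-project/my-repo/hello-app:v1 \
    --region us-central1 \
    --allow-unauthenticated
```

The stateless constraint mentioned above follows from this model: instances are created and destroyed on demand, so the service cannot rely on local state surviving between requests.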

The introduction of GKE Autopilot last year goes even further to fill in the gaps. While it offered the same manageability as Google Cloud Run by managing both the control and data planes, it added customization flexibility and better scalability management. A strong security posture (set by default) also provided the enterprise-grade security that many developers need to adhere to.

GKE Autopilot introduced automatic health monitoring and automatic repair. Developers don’t have to scratch their heads calculating how much compute capacity their workloads need, and cost management is streamlined: you pay for the pods you run and aren’t charged extra for underused nodes.
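The difference shows up at cluster-creation time. Compared with the standard-mode sketch earlier, an Autopilot cluster is created with no node sizing decisions at all (the cluster name and region here are placeholders):

```shell
# Create a GKE Autopilot cluster: there are no node pools to size,
# patch, or repair. Google provisions and heals nodes automatically,
# and billing is based on the resources your pods request.
gcloud container clusters create-auto demo-autopilot \
    --region us-central1
```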

Essentially, GKE Autopilot helps you manage a Kubernetes development environment almost like a pro.

The value of partnerships

Even with GKE Autopilot deployed, you still need a solid understanding of how Kubernetes works and what you’re automating. This is where a company like Searce comes in.

With decades of Kubernetes and development knowledge, a solid set of development resources, and experience working with startups and enterprise development teams, Searce can work with DevOps teams to take the most optimized path while sharing the knowledge to build, or even reinforce, your own expertise.

Paul also notes that there is no one-size-fits-all approach, especially when it comes to development.

For example, whether to choose the standard GKE route, Google Cloud Run, or GKE Autopilot isn’t just a matter of talent or efficiency. You also need to understand your team’s experience managing the data plane and the security constraints you’re working with.

Searce simplifies deploying and running GKE using a five-phase process.

  • Accurately assess the environment in which the client works

  • Present the assessment findings

  • Run workshops to ensure good knowledge transfer

  • Appoint a dedicated technical account manager

  • Use managed service offerings to manage the customer’s environment 24×7

While many consulting firms may offer similar approaches, Paul pointed to Searce’s competitive differentiators.

“We started earlier than our competitors. We have therefore built strong, trusting relationships with many customers. We have also done several deployments around the world, whereas many of our competitors are typically born and raised in Southeast Asia. And when you work with Searce, we put that tremendous experience and expertise in your hands,” he said.

Winston Thomas is the editor of CDOTrends and DigitalWorkforceTrends. He is a Singularity follower, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].

Photo credit: iStockphoto/Jelena Danilovic

