Kubernetes vs. Docker: Why So Binary?
Plays within the Play
Like The Matrix, the world of computing isn’t actually visible from the gritty, resource-hungry real world. When we see a web page load on a screen, it doesn’t matter whether we access that virtual world from a barely-functioning laptop, a late-model smartphone, or a massive-screened computing colossus used to render modern blockbuster games and movies. Each device is a window into the world the content lives in, and that world is not permanently fixed in size or features.
Without giving a potted history of the computer industry, it’s still miraculous to think about the transition from the old world of one computer with one maximum possible capability to one of “just wait a minute while I increase the size of our resources.” If you’re a Doctor Who fan, this “bigger on the inside” idea might seem obvious, but outside of science fiction, a machine that can change its own capacity still seems like magic. With the introduction of modern virtualization systems, multiple computing platforms can be merged or split to increase the resources available to whatever tools are running within a virtualized system.
"I laugh at the software as if I'm 100% confident that it's 2019."
Living in the Virtual World
The need for scalability, and for isolating systems from the continuous changes that happen to infrastructure (the servers, storage, networks, and access requests that make up the combined system), led to the concept of virtualized computing. Virtual machines (VMs) can run as completely independent emulations of a computer with its own operating system and applications (system VMs), and more than one VM can run at the same time on a physical machine, commonly called the host. Without getting into too much detail, there are many kinds of VMs that handle different types of system requirements, and many vendors of VM software. VMware is many people’s introduction to a system VM, while Oracle’s Java Virtual Machine (JVM) and the JavaScript engine behind Node.js are process VMs, which separate individual programs from the underlying hardware; all of these are in common use around the world. The major reason for deploying VM technologies is to make better use of infrastructure resources and to make systems more resilient and easier to scale. Docker is a lighter-weight form of virtualization: it packages applications into containers and makes it straightforward to distribute and automate them.
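To make that last point concrete, here is a minimal sketch of how an application could be described and run as a Docker container using Docker Compose. The service name, image, and port are hypothetical placeholders rather than references to any real project; the point is that the application’s runtime becomes a small piece of configuration that can be shared and automated.

```yaml
# docker-compose.yml — a minimal, hypothetical example of running an
# application as a Docker container. Image name, port, and environment
# values are placeholders.
services:
  web:
    image: example/web-app:1.0     # a containerized application pulled from a registry
    ports:
      - "8080:8080"                # expose the app on the host machine
    environment:
      - APP_ENV=production         # configuration travels with the deployment definition
    restart: unless-stopped        # Docker restarts the container if it crashes
```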
So, if Docker allows us to create a micro-universe where we can freeze time or change the rules to make things run better than they possibly could with the application installed on a non-virtual computer, then Kubernetes (Greek for helmsman or governor) acts as the organizer or conductor for the containers created by Docker or other container technologies.
Kubernetes was created at Google in 2014 and released version 1 in 2015, growing out of an earlier internal project known as Borg, named after the infamous antagonists of the Star Trek universe. If you’re curious about the relevance of this theme for names, consider that the Borg, whenever they encounter something they don’t know how to respond to, divert all appropriate research and development to learning about the novelty, and quickly develop a response that allows their original intent to proceed. In the same way, Kubernetes lets system administrators respond to novel changes in a system’s requirements as conditions in the world change. Suddenly the world discovers your website, and responsiveness plummets because of the increased hits to the site? System admins can increase the available bandwidth, processing power, and database capacity by adjusting the variables associated with those functions. It’s even possible to configure a Kubernetes (often abbreviated as K8s) cluster to automatically reconfigure itself in case of node failure within the system.
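As a rough sketch of what that looks like in practice, a Kubernetes Deployment like the hypothetical one below declares how many copies of a container should be running; if a pod or the node underneath it fails, Kubernetes schedules replacements to get back to the declared count. All names, image tags, and resource numbers here are illustrative assumptions, not taken from any real cluster.

```yaml
# deployment.yaml — a minimal, hypothetical Kubernetes Deployment.
# Kubernetes continuously works to keep `replicas` copies of this container
# running, rescheduling pods when a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                        # desired number of running copies
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0   # the same hypothetical Docker image as above
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "250m"            # what the scheduler reserves for the app
              memory: "256Mi"
            limits:
              cpu: "500m"            # the most the app is allowed to consume
              memory: "512Mi"
```

Scaling up for a traffic spike can then be as simple as raising `replicas`, either by hand or automatically.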
Managing the Real World
KubeCon 2018 Keynote: Migrating 150+ Microservices to Kubernetes.
Possibly one of the best case studies of what makes Kubernetes so versatile, beyond what can be done with virtualization alone, including Docker containers, is the Financial Times, which predominantly operates as an online media source. Presenting their experience in a KubeCon 2018 keynote talk, Sarah Wells delivered a compelling account of migrating the massive range of discrete microservices the Financial Times uses to deliver its content around the world. The core benefit was a dramatic saving in cloud-computing charges for the 150+ services they ran, achieved without re-engineering the internal architecture of each of those services, while dramatically improving the quality of the code behind them. There was certainly overhead in development and administration, and the time it took to move from the old baseline to overall improvement was challenging, but the end results were well worth it, delivering immediate and long-term benefits in costs, service levels, and developer morale.
The Whole is Greater than the Sum of its Parts
While Docker’s own packaging and resource allocation overlaps with some of what Kubernetes automates, using the two tool suites together delivers resilient, fault-tolerant service deployments that scale well and can be maintained with significantly less effort and risk than the alternatives.
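As one small illustration of that scaling claim, Kubernetes can also adjust the number of running containers on its own. The sketch below is a hypothetical HorizontalPodAutoscaler that targets the Deployment from the earlier example and grows or shrinks it between 3 and 10 copies based on CPU load; the name and thresholds are assumptions chosen for illustration.

```yaml
# hpa.yaml — a hypothetical autoscaling policy for the web-app Deployment above.
# Kubernetes adds or removes pods to keep average CPU usage near the target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # aim for roughly 70% average CPU across pods
```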
If your services need high availability, responsive and hassle-free scalability, and satisfied development and admin staff, it’s worth looking at the benefits you can realize by implementing your services in containers, whether Docker-based or built on another container technology, and managing them with Kubernetes.
Check out our DevOps & Deployment Services!
Andrew Manshin
Pieoneers CTO