Tags: linux, docker, kubernetes, docker-container, failover, cluster

Kubernetes cluster vs HACMP cluster


An HACMP cluster provides high availability across IBM LPARs or AIX physical servers.

Similarly,

MSCS cluster service on Windows virtual machines

Veritas Cluster Server for Linux/Windows virtual machines


How is a Kubernetes cluster different from these cluster services?


Solution

  • Key Differences

    The TL;DR Backstory

    Clustering = teaming up multiple cooperating servers to accomplish something that none of the individual servers ("nodes") could accomplish on their own.

    The cluster products you mention--HACMP, MSCS, etc.--were designed in the 1990s (and evolved over time) primarily to provide higher app/service availability than any single server could guarantee. Given appropriate cluster-enabled apps, databases, and middleware, should one server in a cluster go down or suffer a serious fault, the app/service continues operating on the remaining nodes without interruption. In the best case, this can nearly eliminate both unplanned and planned downtime.

    Kubernetes clusters have some high-availability features, but start from a very different worldview--one that begins 20 years after HACMP and friends. In IT, 20 years = multiple entire generations. Kubernetes and similar clusters (e.g. Docker Swarm) expect each server to host multiple "containers" (packaged workloads) rather than a single app/workload. Operating-system containers are a lightweight form of app/system/service virtualization that basically didn't exist for mainstream applications during most of the HA clusters' lifetimes.

    The abstractions and capabilities of any platform evolve to match the problems expected of its common workloads. For Kubernetes, this means multiple or many workloads per server, a great many updates over an app/service's lifetime, networking as the primary means of software connectivity, and intense dynamism / constant flux in where apps/services live. Those were not expectations, design criteria, or common realities for the HA clusters or the software they ran.

    In addition to the many abstractions provided by containers (e.g. Docker) versus base operating systems, Kubernetes provides many abstractions and tools for "orchestrating" many apps/services concurrently and dynamically across large clusters of servers--e.g. Pods (groups of containers deployed and operated together) and StatefulSets (for managing workloads that need stable identities and persistent state); a minimal sketch of a Kubernetes workload definition appears at the end of this answer. HA clusters include some concepts/facilities that go beyond single servers (e.g. service definitions, connection topologies, heartbeats, failover policies), and these could be considered ancestral forms of the container and Kubernetes facilities. But platforms like Kubernetes, which came after the Internet, scale-out, virtualization, cloud, and DevOps revolutions, address massively greater scale and dynamism than any 1980s- or 1990s-born HA cluster ever would.

    If HA clusters were horse-drawn carts of the agrarian age, Kubernetes would be modern tractor-trailers running on interstate highways. Both enable "getting to market," albeit at very different levels of scale, with very different expectations and infrastructure.

    Finally, because Kubernetes focuses on scale and dynamism, many of its workloads are not thoroughly optimized for availability--at least not in the same "it must stay running, always and forever!" way that is the very point of HA clusters.
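
    For a concrete sense of the Kubernetes side, here is a minimal sketch of a Deployment manifest (the names web and nginx:1.25 are illustrative placeholders, not from the question). A Deployment declares a desired number of interchangeable replicas; Kubernetes schedules them across the cluster and recreates any that are lost when a node fails--roughly its analog of an HA cluster's failover policy.

        # Hypothetical Deployment: 3 interchangeable replicas of a stateless web app.
        # Kubernetes keeps 3 Pods running somewhere in the cluster; if a node dies,
        # its Pods are recreated on surviving nodes automatically.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web                  # illustrative name
        spec:
          replicas: 3                # desired count, not a fixed node assignment
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
              - name: web
                image: nginx:1.25    # placeholder image
                ports:
                - containerPort: 80

    Applied with kubectl apply -f web.yaml, the cluster converges on three running Pods; drain or lose a node and the missing replicas come back on surviving nodes, with no per-service failover scripting.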