Kubernetes for the resistant: when not to use it
I have run Kubernetes in production for years. Here are the four signals that tell me a team should not be running it yet.
- Kubernetes
- Architecture
- Platform
- Pragmatism
I have run Kubernetes in production. I like Kubernetes. I have also seen ten teams adopt it for reasons that boiled down to “everyone else has it” and watched all ten regret it for the next eighteen months.
Kubernetes is excellent once your problem looks like the one Kubernetes was designed to solve. The trap is that the problem you have on day one almost never looks like that — but Kubernetes is so culturally dominant that adopting it feels safer than the alternatives. It usually is not.
Here are the four signals I look for.
Signal 1: you have one product, one database, and a deploy a week
If your engineering org runs one application, with one database behind it, and ships a release per week, the right deployment platform is a single VM with systemd, behind a reverse proxy. You will solve every problem you have for the next two years with git pull && systemctl restart, and the on-call person will sleep.
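Concretely, that platform is one unit file and a two-command deploy. A minimal sketch, assuming a server binary under /srv/app and a reverse proxy in front; every name, path, and port here is illustrative:

```ini
# /etc/systemd/system/app.service -- a sketch, not a hardened unit
[Unit]
Description=The product
After=network.target

[Service]
User=app
WorkingDirectory=/srv/app
ExecStart=/srv/app/bin/server --port 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The entire deploy pipeline is then `cd /srv/app && git pull && systemctl restart app`. That is the whole thing. No manifests, no controllers, no upgrade cadence.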
Kubernetes is for when you have N services on M nodes and you need to schedule them. If N is 1 and M is 1, the scheduler does no work, and you pay the operational cost (the YAML, the upgrades, the network plugin choices, the ingress controller, the cert-manager, the metrics stack) in exchange for nothing.
Signal 2: nobody on the team has run a stateful service in production
Kubernetes makes stateless workloads easy and stateful workloads deceptively easy. The tutorials show you a three-line Postgres StatefulSet that boots happily. The reality is that in three months you will have a node-disk-pressure event, and the difference between “we lose 30 seconds” and “we lose the database” comes down to whether you wrote the right podAntiAffinity, sized the PVCs correctly, configured the storage class for retention, and rehearsed the failover.
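To make that concrete, here is a sketch of the settings the tutorials tend to omit, not a complete or production-ready manifest; every name (pg, pg-data, retained-ssd) is illustrative, and env, probes, and resource limits are left out:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd
provisioner: kubernetes.io/no-provisioner   # substitute your actual CSI driver
reclaimPolicy: Retain        # PVs survive PVC deletion; the default Delete does not
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg
spec:
  serviceName: pg
  replicas: 3
  selector:
    matchLabels: { app: pg }
  template:
    metadata:
      labels: { app: pg }
    spec:
      affinity:
        podAntiAffinity:     # never co-locate two replicas on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels: { app: pg }
              topologyKey: kubernetes.io/hostname
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: pg-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pg-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: retained-ssd
        resources:
          requests:
            storage: 100Gi   # sized deliberately, not defaulted
```

Notice how much of this is about failure modes, not the happy path. None of it appears in the three-line tutorial, and all of it decides which of the two outcomes above you get.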
If nobody on the team has carried a database pager before, do not put your database in Kubernetes. Use a managed Postgres (RDS, Cloud SQL, Crunchy, Neon, whatever). Run your stateless apps in K8s if you must. Let the database be someone else’s problem until you have a senior engineer whose job is “the database in K8s”.
Signal 3: you cannot describe what your platform team owns
The unspoken assumption of Kubernetes is that someone will keep the cluster healthy. Upgrades every quarter. Plugin compatibility checks. Observability of the control plane itself. The ingress route that breaks when you upgrade the Helm chart. The CSI driver that has a known issue.
In a serious adoption, that “someone” is a platform team. It is one or two engineers, full-time. If you do not have those people and you cannot hire them, you are about to make those tasks the part-time job of someone whose actual job is shipping product. They will resent it, the cluster will degrade, and the next version of your platform decision will be “we are migrating off K8s”.
A managed control plane (EKS, GKE, AKS) reduces this cost a lot. It does not eliminate it. The day a node goes unschedulable, you will still own the diagnosis.
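That diagnosis looks something like the following, using standard kubectl; the node name is illustrative:

```sh
# Spot NotReady or SchedulingDisabled nodes
kubectl get nodes

# Check conditions (DiskPressure, MemoryPressure) and taints on the suspect node
kubectl describe node ip-10-0-1-23

# Recent node-level events, newest last
kubectl get events --field-selector involvedObject.kind=Node --sort-by=.lastTimestamp

# What is stranded on that node
kubectl get pods -A -o wide --field-selector spec.nodeName=ip-10-0-1-23
```

The managed service keeps the API server up. Everything below it is still yours.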
Signal 4: your bottleneck is not deploys; it is product
This is the one nobody wants to hear. Kubernetes optimises the velocity of deploying many services to many environments. If your team’s bottleneck is “we ship features slowly”, Kubernetes will not move the needle, because shipping features slowly is almost never a deploy problem. It is a product problem, or an organisation problem, or a tooling problem upstream of deploy.
Look honestly at your last six months. Where did the time go? If the answer is “discovery, design, code, review, QA”, then Kubernetes is not the answer. Boring deployment + obsessing over the slow part is the answer.
When does Kubernetes earn its place? When you have many services, multiple environments, real autoscaling needs, a platform team that owns it, and a culture that genuinely understands what the YAML says. At that point it is not “Kubernetes is heavy”; it is “Kubernetes does the thing we need”.
Until then: a VM, systemd, and a reverse proxy. They have shipped more reliable software than all the Kubernetes clusters in the world combined.