Kubernetes is a complex beast. Once you get the hang of some basic features, it can be a nice way of deploying and running software. I used GKE some years ago and more recently EKS. Despite being "managed" offerings, I found the amount of ad-hoc glue required to keep a cluster going to be just too much.
In principle, Kubernetes is great. I stick to a productive yet not overly complex subset of core features (pods / deployments / services), and I only deploy stateless services on it – I don't like having too many moving parts.
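To make that subset concrete, here's roughly the entire surface area involved – a minimal Deployment plus Service for a stateless web app (names, labels and the image are placeholders, not from any real setup):

```yaml
# Sketch of a stateless workload using only core features.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 8080
```

Nothing stateful, no volumes, no operators – the kind of workload that should, in theory, be trivial for a vendor to run on your behalf.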
A managed service should let me focus on running my workload. The initial provisioning of a cluster is usually straightforward. It's keeping it alive, getting auto-scaling to work, and updating Kubernetes versions that makes it a hassle.
I don't want to think about the underlying compute instances. Is the cluster made up of 5 or 12 nodes? Don't really care, as long as it's running and serving traffic.
Ideally, I'd just set a budget and some constraints in terms of instance types and availability zones. The vendor takes care of provisioning, auto-scaling and updates.
Stateful workloads may make this more difficult. I'd be very cautious about running a production database on Kubernetes, so I rely on RDS, ElastiCache etc. for storage. That seems to be a common enough pattern.
Maybe there is a stateless subset of Kubernetes that can be offered as a real managed product? Set a budget, get a kubeconfig file and never have to worry about the cluster again. That's when Kubernetes will make sense for me.
Perhaps AWS Fargate is what I'm looking for. Perhaps Azure, DigitalOcean or Linode have implemented something similar. The vendor that figures this out will finally give me a comfortable way of running Kubernetes. Until then, it'll be hard to justify the operational burden.
You can follow me on Twitter @_alpacaaa.