If you're talking about a hosted k8s like EKS or a toy/single-node k8s, in 2021, then nothing: k8s is much better.
But if you're on-prem, with a tonne of metal, and just arrived in 2014 via a time machine, swarm was so much simpler that if you were a sysadmin who already had their own scripts -- their own jenkins-powered CI and git hooks that built and deployed whatever it was you were building -- then swarm looked like a nice, gradual extension of that, and k8s looked more like starting over and admitting defeat.
I mean k8s is basically an infra rewrite in any shop that was/is currently VM-based. "Hey, you know that fundamental unit of isolation you've built all your tooling around? Throw it all away."
Don't get me wrong, I would absolutely go with k8s on any greenfield project, but there's a huge opportunity for someone to take everyone's VM orchestration tools and quietly, semi-transparently add k8s support for gradual migrations.
Personally, k8s is one of the easiest systems I've learned to use in the past few years. It has many components, so many that one can't practically understand all of them well. But every component follows the same design principle, so there's a clear path for a noob like me to get the full picture step by step.
The first thing I read on k8s was the controller pattern[0]. After that, everything became much easier to learn. Something wrong with a pod? Find its controller and start troubleshooting from there. Oh, the pod is controlled by a replicaset? Check the replicaset controller. The replicaset is managed by a deployment? You know where to go.
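To make that chain concrete, here's a minimal sketch using the official kubernetes Python client (the pod name "my-pod" and namespace "default" are placeholders, not anything from the comment above) that walks ownerReferences from a pod up to its replicaset and then its deployment:

    # Sketch: follow ownerReferences from a pod up the controller chain.
    # Assumes a working kubeconfig; "my-pod" and "default" are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    pod = core.read_namespaced_pod("my-pod", "default")
    for owner in pod.metadata.owner_references or []:
        print(f"pod is owned by {owner.kind}/{owner.name}")
        if owner.kind == "ReplicaSet":
            rs = apps.read_namespaced_replica_set(owner.name, "default")
            for rs_owner in rs.metadata.owner_references or []:
                # usually a Deployment: that's the next place to troubleshoot
                print(f"replicaset is owned by {rs_owner.kind}/{rs_owner.name}")

This is the same information kubectl describe shows under "Controlled By"; the point is just that every object records who manages it, so you always know the next hop.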
That's why I said it is only complicated in some sense. I've seen a lot of people start learning this system by going through a list of popular components and their concepts. I would've gotten lost easily and probably given up if I had done the same in the beginning.
k8s is just a bunch of processes working together. The downside is that if something goes wrong, it is unclear which process is faulty and how to fix it. A misconfigured DNS may take down the whole cluster, and the symptom is that every process fails because the network is out. It's difficult to trace back to the source of the fault, since there are a lot of network-related configs that could be the culprit.
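As a rough illustration of the kind of sanity check that helps here (my sketch, not something from the comment above): resolving the built-in kubernetes service name from inside a pod at least separates "cluster DNS is broken" from "my app is broken":

    # Minimal sketch: run inside a pod. If this lookup fails, the problem is
    # cluster DNS (e.g. CoreDNS/kube-dns), not the application itself.
    import socket

    try:
        ip = socket.gethostbyname("kubernetes.default.svc.cluster.local")
        print(f"cluster DNS ok: kubernetes service resolves to {ip}")
    except socket.gaierror as err:
        print(f"cluster DNS lookup failed: {err}")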
On the plus side, if someone else configured the cluster correctly for you (e.g. a cloud service like GKE), then it's a breeze to use.
As a guy who's been running stuff with docker-compose and is now learning k8s, I can say the learning curve for k8s is a cliff. Every single layer of the stack gets added complexity.
Look at the history: in the beginning, Kubernetes didn't have deployments, only replicasets; it didn't have ingress; it lacked the cloud provider integration you have today...
It was barebones like Mesos, but they managed to focus on the right things early on. Once these features and a bit of an ecosystem started to build, the question of whether we should stay on Mesos with a bunch of homegrown management tooling or move to Kubernetes wasn't even a decision anymore. There was still a significant amount of code that we later just deleted, because it became a first-class feature or the community implemented something much better.