Then you just use "kubectl apply -f myapp.yml" to create or update.
Re ingress, I agree that it's probably the weakest part of Kubernetes right now. It's particularly weak when it comes to internal load balancing. When you have lots of services that should only be available internally, you'll want an internal ingress; I don't know about AWS, but Google Cloud Platform's internal load balancer doesn't support pointing at Kubernetes [1]. I haven't found a better option than to run Traefik as a DaemonSet and rely on round-robin DNS (aka poor man's HA).
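For what it's worth, the DaemonSet part is just something along these lines (image tag and flags are placeholders rather than a tested config, and the service account / RBAC bits are omitted):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: traefik
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: traefik
      template:
        metadata:
          labels:
            app: traefik
        spec:
          containers:
          - name: traefik
            image: traefik:1.7      # placeholder version
            args:
            - --kubernetes          # use the Kubernetes ingress provider
            ports:
            - containerPort: 80
              hostPort: 80          # listen on every node; round-robin DNS points at the node IPs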
I really think the community ought to adopt this as best practice and actively discourage (or even deprecate) the List type. It's just noise, with zero benefit IMO over raw slices or serialization as arrays (or document lists for YAML).
As the author (blame me) of the List type, the primary advantage is that it needs no special logic to process as JSON. Newline-separated JSON is weird for a lot of libraries, and in the future we want to have endpoints that allow bulk creation / bulk apply.
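For anyone who hasn't run into it: a List is itself just one ordinary JSON document with an "items" array, so any standard JSON library reads it in a single parse call. Roughly (items trimmed to just the identifying fields):

    {
      "apiVersion": "v1",
      "kind": "List",
      "items": [
        { "apiVersion": "apps/v1", "kind": "Deployment", "metadata": { "name": "myapp" } },
        { "apiVersion": "v1", "kind": "Service", "metadata": { "name": "myapp" } }
      ]
    }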
Helm "charts" are templates that generate Kubernetes manifests. To install a chart, you provide values (for which there are defaults). For example, here [2] is the chart for PostgreSQL. The default values are in values.yaml, and the templates for each manifest are in the templates folder.
Conceptually, this is even cleaner than Docker Compose, because there's zero data that isn't specific to your install: everything else falls back to a predefined default.
So in the same way that the official PostgreSQL Docker image is general-purpose, you can define a completely general-purpose chart that can be used by anyone. You customize it by providing overrides.
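For example, something like this (the value names are from memory and may not exactly match the chart's values.yaml):

    # my-values.yaml -- only the settings you want to override;
    # everything not listed here falls back to the chart's defaults
    postgresPassword: changeme
    persistence:
      enabled: true
      size: 20Gi

Then roughly "helm install -f my-values.yaml stable/postgresql", or "--set postgresPassword=changeme" for a one-off override.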
The only downside is that you still end up with Kubernetes manifests. It's not an abstraction; it's an automation tool.
You don't need to split your manifests into multiple files. You can use a list:
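Something along these lines (names and image are placeholders):

    apiVersion: v1
    kind: List
    items:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: myapp
        template:
          metadata:
            labels:
              app: myapp
          spec:
            containers:
            - name: myapp
              image: myorg/myapp:1.0   # placeholder image
              ports:
              - containerPort: 8080
    - apiVersion: v1
      kind: Service
      metadata:
        name: myapp
      spec:
        selector:
          app: myapp
        ports:
        - port: 80
          targetPort: 8080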
Edit: Or YAML's "multiple document" syntax:
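i.e. the same two objects in one file, separated by "---" (again, placeholder names):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myorg/myapp:1.0   # placeholder image
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - port: 80
        targetPort: 8080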
[1] https://github.com/kubernetes/ingress/issues/112