Hacker News | danielmartins's comments

Looks awesome, amazing work!

Any particular/strong reason for choosing Zig for this?



Thanks!


Not as generally available as I thought, and from the looks of it, it feels just as "hacky" as the preview with respect to the user experience. For some reason, I was expecting more from them.


> This is not true, if you run Debian / CentOS7 / Ubuntu, out of the box the settings are good. The thing you don't want to do is start to modify the network stack by reading random blogs.

I agree these are good defaults, but they are not meant to work well for every kind of workload. And yes, if things are working for you the way they are, that's okay; there's no need to change anything.

On the other hand, I personally don't know anyone who runs production servers of any kind on top of unmodified Linux distros.


> On the other hand, I personally don't know anyone who runs production servers of any kind on top of unmodified Linux distros.

You are so, so, so lucky... lol. I say that as someone who has come across a desktop CentOS install on a server on multiple occasions, complete with X.Org running and 3-4 desktop environments to choose from, along with ALL of the extras: KDE office apps, GNOME office apps, etc... HORRIBLE.


Sounds interesting! Do you have URLs with more information about this? I would love to read good posts about that! My production servers have been running with standard parameters at every company so far; I feel I might be missing out!


No. By default, the NGINX ingress controller routes traffic directly to pod IPs (the Service endpoints):

https://github.com/kubernetes/ingress/tree/master/controller...
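
For illustration, a minimal Ingress like the one below (names and host are made up) ends up as an NGINX upstream whose servers are the pod IPs behind the referenced Service, not the Service's cluster IP:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80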


Thank you. So there is a DNAT to get to the Ingress Controller, but from there at least it's direct routing to the service endpoint(s)? Does that mean the virtual IP given to the Service is basically bypassed when using an Ingress Controller?

TLS termination at the Ingress Controller and by default unencrypted from there to the service endpoint?

I found this useful: http://blog.wercker.com/troubleshooting-ingress-kubernetes

Interesting discussion here: https://github.com/kubernetes/ingress/issues/257

It seems like a lot of overhead before even starting to process a request!


> TLS termination at the Ingress Controller and by default unencrypted from there to the service endpoint?

We are doing TLS termination at the ELB (we're running on AWS).

> Interesting discussion here: https://github.com/kubernetes/ingress/issues/257

Great, thanks!

Regarding ways of updating the NGINX upstreams without requiring a reload, I was just made aware of modules like ngx_dynamic_upstream[1]. I'm sure there are less disruptive ways to address this than reloading everything, so this is probably something that could be improved in the future.

[1] https://github.com/cubicdaiya/ngx_dynamic_upstream


May I ask how you are automating the ELB/TLS configuration and how that ties into the Ingress controller? Do you somehow specify which ELB it should use? We're in a similar situation.


You can annotate any Service of type LoadBalancer in order to configure various aspects[1] of the associated ELB, including which ACM-managed certificate you want to attach to each listener port.
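
For example (the certificate ARN, service name, and ports below are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/00000000-0000-0000-0000-000000000000
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    spec:
      type: LoadBalancer
      ports:
      - port: 443
        targetPort: 8080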

[1] https://github.com/kubernetes/kubernetes/blob/master/pkg/clo...


Thanks a lot, this will save us quite some time.


> Why would any sane Op/Inf/SRE choose not to have at least account-level isolation - is it only a matter of cost due to under-utilization?

In our particular case, yes, pretty much. We are a small company with a small development team, so even if I wanted to split accounts across teams, we would end up with one account per 2-3 users, which doesn't make a lot of sense right now.


> This is a great read. I know the single cluster for all env is something that is sort of popular but it's always made me uncomfortable for the reasons stated in the article but also for handling kube upgrades. I'd like to give upgrades a swing on a staging server ahead of time rather than go straight to prod or building out a cluster to test an upgrade on.

I've been doing patch-level upgrades in place since the beginning and have never had a problem. For more sensitive upgrades, this is what I do: create a new cluster based on the current state in order to test the upgrade in a safe environment before applying it to production.

And for even riskier upgrades, I go blue/green-style: I create a new cluster running the same workloads and gradually shift traffic over to it.


> Could you share which version of NGINX you found the issue with the reloads? Which version the fix was released?

I'm using 0.9.0-beta.13. I first reported this issue in an NGINX ingress PR[1], so the last couple of releases do not suffer from the bug I described in the blog post.

> I find it interesting/brave that you use a single cluster for several environments.

I'm not working for a big corporation, so dev/staging/prod "environments" are just three deployment pipelines to the same infrastructure.

As of now, things are running smoothly as they are, but I may well move to different clusters for each environment in the future.

[1] https://github.com/kubernetes/ingress/pull/1088


> OP didn't mention what Linux distro he's using and what are all of the OS-level configs he changed in the end of the day.

I'm using Container Linux, and yes, I made a few modifications, but I intentionally left them out of the blog post because someone might be tempted to use them as-is.

I'll share more details in that regard if more people seem interested.


I'd be interested to hear more.


Nice article.

You said you are using this toy cluster to play around with monitoring as well, so could you share more details in that area, for instance:

- How many resources do the Kubernetes components take on your Pi boards?

- Did you have to do any tweaking in order to make everything run smoothly?


Thanks for asking. I am planning to write another blog post dedicated entirely to the setup of the cluster itself. For now, I will answer your questions here:

- Since the Pi boards are not very powerful (4 cores/1 GB RAM), the monitoring alone takes most of the resources, but I can still deploy small Go/Python apps. Currently I have 3 Orange Pi boards and 1 Raspberry Pi as the master node. I still have about half of the memory available on each node, so ~450Mi of free RAM. On the CPU side, only the master node constantly uses more than half of the available CPU cores.

- I actually ran into problems due to the amount of logs produced by the Kubernetes components: the partition dedicated to log files was constantly getting full. After properly configuring logrotate, things became healthy again (see the sketch below). Another interesting problem I had was orphaned pods; I still don't know the reason for that. To fix it, I had to add some `rm` commands to clean up the directories of old orphaned pods.
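
The logrotate configuration is nothing special; something along these lines (the log path and pattern are placeholders and depend on where your components write their logs):

    # /etc/logrotate.d/kubernetes -- hypothetical path/pattern
    /var/log/kubernetes/*.log {
        size 50M
        rotate 5
        missingok
        notifempty
        compress
        copytruncate
    }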


It's not open source, but I'll try to sell the idea to our CTO. :)

Just to give you more details about its inner workings: the function is written in JavaScript, gets called when certain events come from GitHub ('pull_request', 'status', and 'push'), and uses kubectl to modify the corresponding deployments depending on the event.
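
A simplified sketch of the idea (not the actual code; the deployment naming, namespace, registry, and payload handling are made up for illustration):

    // Hypothetical handler: invoked by the function platform with the
    // GitHub event name and payload. Payload fields differ per event
    // type; this is simplified.
    const { execFile } = require('child_process');

    function handleGitHubEvent(eventName, payload) {
      if (!['pull_request', 'status', 'push'].includes(eventName)) {
        return;
      }

      // Assumption: one deployment per repository, image tagged by commit SHA.
      const deployment = payload.repository.name;
      const sha = payload.after || payload.sha;

      // Shell out to kubectl to roll the deployment to the new image.
      execFile('kubectl', [
        'set', 'image',
        `deployment/${deployment}`,
        `${deployment}=registry.example.com/${deployment}:${sha}`,
        '--namespace=development',
      ], (err, stdout, stderr) => {
        if (err) {
          console.error('kubectl failed:', stderr);
        } else {
          console.log(stdout);
        }
      });
    }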

Nothing fancy there, trust me.


Do you only create copies of the stateless pieces of each stack, or do you also copy databases?


We currently only run stateless apps on Kubernetes. All databases are hosted elsewhere (RDS, MongoDB Cloud, etc)


Let me rephrase that: Do you spin up a new copy of any necessary data stores when you deploy a topic branch of an app, or do all versions of the app share the same view of data in their environment (e.g. staging/production)?


No, these 'development' environments point to other services in 'staging'.

