The Tailscale Universal Docker Mod (tailscale.dev)
245 points by notamy on Oct 8, 2023 | 60 comments


For those who want to run Tailscale in their Docker containers but don't want to switch to images based on linuxserver.io, you can still run Tailscale as a sidecar container and use "network_mode: service:tailscale"

I do that for my containers and it is incredibly useful for cross-container communication, especially for containers that are hosted on different dedicated servers.

https://mrpowergamerbr.com/us/blog/2023-03-20-untangling-you...
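A minimal compose sketch of that sidecar pattern, assuming the official tailscale/tailscale image (service and image names are illustrative; check the current Tailscale docs for the exact environment variables):

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: my-app                    # node name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-...       # your auth key
      - TS_STATE_DIR=/var/lib/tailscale # persist node identity
    volumes:
      - tailscale-state:/var/lib/tailscale

  app:
    image: my-app:latest                # illustrative
    network_mode: service:tailscale     # share the sidecar's network namespace

volumes:
  tailscale-state:
```

With this, anything the app listens on is reachable over the tailnet via the sidecar's Tailscale IP.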


I run my game servers using `network_mode: service:tailscale`, and every time the game server restarts (or crashes), Tailscale permanently loses connectivity and the container needs to be recreated (restarting it doesn't work).

To solve this problem I add another container which should never need to be restarted, and both the game server and Tailscale use that container's network namespace. This is also the exact use case of Kubernetes' pause containers, so I just use the EKS pause image from the ECR public gallery.

Another tip I'd recommend is to run the Tailscale container with `TS_USERSPACE: 'false'` and `TS_DEBUG_FIREWALL_MODE: nftables` (since autodetection fails on my machine) and give it `CAP_NET_ADMIN`. This allows Tailscale to use a TUN device instead of userspace emulation, which is supposed to be more performant. But the clear benefit is that the game server will see everyone's Tailnet IP instead of 127.0.0.1.

In Thai: https://blog.whs.in.th/node/3676
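A compose sketch of the pause-container arrangement described above (image names and tags are illustrative, not a verified configuration):

```yaml
services:
  pause:
    # Only holds the network namespace; should never need restarting.
    image: public.ecr.aws/eks-distro/kubernetes/pause:3.9  # illustrative tag
    restart: always

  tailscale:
    image: tailscale/tailscale:latest
    network_mode: service:pause
    cap_add:
      - NET_ADMIN                       # needed for kernel networking
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - TS_USERSPACE=false
      - TS_DEBUG_FIREWALL_MODE=nftables

  game:
    image: my-game-server:latest        # illustrative
    network_mode: service:pause
```

Because both `tailscale` and `game` point at the pause container's namespace, either one can be restarted or recreated without the other losing connectivity.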


I may be wrong about the "TS_USERSPACE" environment variable, but I think you don't need to disable it.

If you were using userspace networking, you wouldn't be able to connect to other services in your tailnet without setting up an HTTP/SOCKS5 proxy https://tailscale.com/kb/1112/userspace-networking/


It'll work but my Minecraft server sees everyone as 127.0.0.1. After disabling TS_USERSPACE I see each person's Tailnet IP. Tailscale doesn't provide this information anywhere (since their node name is private), so once I have their IP address I can also use `tailscale ping` to ping the IP and see whether the connection is going through relay or direct, which is helpful when debugging their latency.

My users report better latency, but I doubt it.


This seems a lot cleaner than injecting new binaries into existing images or depending on linuxserver.io images.


The "benefit" of one tailscale daemon per container is that https/external access/etc. can be handled automatically


Does linuxserver.io have a bad reputation? Or is it just that it's yet another dependency in the stack?

Asking because I've been happy with their containers so far


Many useful images are based on linuxserver.io, but most Docker images are not.


I use quite a few linuxserver.io containers on my home stack on my Synology NAS, and they've been awesome. When I see that they're from there, I know they'll be reliable and the setup process is going to be straightforward and similar to other containers I've already used.

Ironically, the 1 container I really wanted to use this Tailscale mod for was not from linuxserver.


I find it a bit annoying that almost all their images assume root access by default. Their init script does a bunch of things as root and only switches to a non-root user at the very last step, before starting the main process, if some magic environment variable is discovered. If your infra does not allow root users in containers, you can't use their images.

It's also too much magic for my liking. Some software distributed as a single executable binary gets packaged in an overcomplicated base image on top of another base image, when I could technically just copy the binary into a scratch image and call it a day. I understand the benefits when they have to manage tons of images at scale, but my life has been much easier with images packaged by myself or the upstream projects.
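For a statically linked binary, the "copy into scratch" approach above can be as small as this multi-stage Dockerfile (paths and names are illustrative):

```dockerfile
# Build a static binary (CGO disabled so it runs on scratch)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /myapp .

# Final image contains nothing but the binary
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
```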


I am really impressed by what the tailscale folks have been building. I use their product suite regularly and have nothing but good things to say about it. I will be tinkering with this mod as well starting next week ;)

keep it up guys!


While this is super cool, it's not Universal. It requires usage of containers based upon LinuxServer.io containers.


How does the base image make them not universal?


Not every container is based on the LinuxServer.io stack. I can't take any arbitrary container and use the docker mod and have it work.

I have over 25 containers running on my home server and not a single one of them is based on a LinuxServer.io image. This "universal" mod would work with 0 of them.


As others have said, you can run a sidecar container and proxy your current containers through the sidecar, and into the Tailscale network. They are universal in that the docker containers can run on any docker host, not that they are guaranteed to mesh/drop in and run inside whatever random containers you already run. Not sure why I am being downvoted for asking a genuine question out of curiosity…


Would you list them? I’m always looking for cool new containers for my homelab


Wireguard + GUI: https://github.com/wg-easy/wg-easy

Managing all those household docs: https://docs.paperless-ngx.com

Backups of mail accounts: https://www.offlineimap.org

Cloud storage for phones: http://nextcloud.com

Mirroring podcasts locally: https://github.com/akhilrex/podgrab

Managing dynamic service dns via plugins: https://coredns.io

My own matrix instance: https://matrix-org.github.io/dendrite/

Backups: https://restic.net

Media Management: https://jellyfin.org

Relay only tor help: https://www.torproject.org

S3 compatible storage: https://github.com/seaweedfs/seaweedfs

Git + CI: https://about.gitlab.com

Managing SSL and container proxying: https://traefik.io

Mirror the docker registry locally: https://github.com/docker-library/docs/tree/master/registry

Samba support for the windows hosts: https://github.com/ServerContainers/samba

HTTP/S Proxy with support for modifying results: http://www.privoxy.org

Database: https://www.postgresql.org

Datastore: https://redis.io

and a bunch of support software. Paperless has Tika and Gotenberg as deps for example.


Do you find GitLab and its Runners to be heavy at all in your home lab? I'm curious if anyone's been using Gitea/Forgejo with Actions or Woodpecker (Drone).


I have the runner set to poll every 60 seconds, so it's not using any real resources unless it's running a build. Gitlab itself is the heaviest consumer of resources, however, having the integrated CI system is worth it for me. I use gitlab at work, so it's familiar.


> Cloud storage for phones: http://nextcloud.com

Thanks, that sums it up for me.

I used OC/NC for years but in the last three I mostly abandoned it because the desktop app (for Windows, at least) is atrocious and the Android one... isn't good either.

But as on-demand document download with occasional upload it's fine.


Do you find any blocklisting issues running a relay only? I heard, in the long long ago, that people who even just ran relays would find themselves on IP blocklists because of ignorant blocklist builders not knowing the difference between exit and non-exit nodes, or grabbing the wrong lists.


I have not and I’ve run the relay for about 5 years now. I do get lots of captchas on cloudflare, but generally they’re just the click to prove you are a human style and it’s not too bad for the household.


Awesome! Thank you!


LinuxServer.io containers rock for homelab. Not sure why you wouldn't use them.


It's called a "universal" mod because it can work on any linuxserver.io container image.


I would disagree that containers aren't supposed to run more than one process. It's just discouraged because a lot of people aren't well versed in the pitfalls of being PID 1. Fedora's toolbox is a great counter-example, as is systemd now being able to boot up as your PID 1 in some container distros without much modification.


To be fair, even for running a single process the pitfalls are real. I've been seeing Tini[1] used a lot for these situations.

I just read in the README that Tini is included by Docker since 1.13 if using --init flag.

[1] https://github.com/krallin/tini
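A sketch of what the `--init` flag mentioned above does (image name is a placeholder; consult the Tini README for specifics):

```shell
# Docker >= 1.13 can run tini as PID 1, so it reaps zombies
# and forwards signals to your process:
docker run --init my-image

# Roughly equivalent to baking tini into the image yourself:
#   ENTRYPOINT ["/tini", "--", "/my-app"]
```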


Author of the post here in case you have any questions!


No question, just a thanks. I've read a lot of your stuff, and it's always incredibly insightful and clever. Thanks for being an amazing Internet citizen!


Just another boring "thanks for your fantastic blog posts!" post. Love the personalities in them too :-)


you rock


I wonder if this will fix the issue of appending -n to new ephemeral servers that join the network. For example, if you have a service wiki and that container/instance gets restarted, it then appears on your tailnet as wiki-1, making users unable to access it at wiki/

Their official solution is to run a logout command before shutting down but that's not always possible.


This is where mounting a state volume helps. If you mount a state volume, you don't need to make the containers ephemeral unless you expect them to move between hosts frequently. If that is the case, I'd love to hear more about your use case so that I can suggest a better alternative.
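A hedged sketch of mounting a state volume so the node keeps its identity (and name) across restarts; `TS_STATE_DIR` is the variable the tailscale/tailscale image uses for this, but verify against the current docs:

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    environment:
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      # Bind mount survives container recreation, so the node
      # re-registers as itself instead of as name-1.
      - ./tailscale-state:/var/lib/tailscale
```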


That is helpful to know for Docker containers, thank you.

The use-case where we find the renaming most frustrating is typically when we start a cloud instance with a Tailscale setup script in the cloud init (via Terraform). If we, say, change a parameter that requires Terraform to restart that instance, then the freshly-started instance will be given a `-1` name by Tailscale and the old instance will be offline.

I wish there were simply a --force-hostname option or something of that nature that tells Tailscale "if a host is authenticating with this name, give it that name; any older machines using that name should be kicked off"


This is really cool, I didn’t even know Docker mods existed. That’s the best kind of cool.

I wonder if the internals will be open sourced? I assume it’s a pretty “simple” go tcp proxy that listens on the tailnet instead of an open port. I had been thinking about writing one for our services at work, so maybe we can use this, but I’d prefer to build the binary directly into our containers.


It's likely just `tailscale serve https / <upstream>`.

https://github.com/tailscale/tailscale/blob/main/ipn/serve.g...

And they also support direct embedding:

https://tailscale.dev/blog/embedded-funnel

I think this is built on the wireguard-go + gvisor mashup, that allows you to do this with just Wireguard:

https://github.com/WireGuard/wireguard-go/tree/master/tun/ne...

One of my favorite applications of this is this little tool that turns Wireguard VPNs into SOCKS5 proxies (which you can selectively enable in your browser)

https://github.com/octeep/wireproxy


This is really cool. Networking in general is full of quirks and of what people think of as "magic".

Full disclosure, I am founder of Adaptive [1]. We use a similar technique to the one with VPN exposed as SOCK5 proxy but for accessing internal infrastructure resources.

[1] https://adaptive.live/


I think we had the same idea, but I didn't get to finish building mine. OIDC tokens being available in most CI systems these days is a nice building block.

https://github.com/acuteaura/tinybastion/


More than happy to chat if you drop an email at debarshi [dot] adaptive [dot] live.


Docker doesn't do mods. As the article says, this is possible due to s6 and s6-overlay, which are included with linuxserver.io docker images, combined with a set of scripts that set it all up. This does prevent your containers from being immutable.

All the code for LSIO images is available on their GitHub.


You're in luck: it's literally normal tailscale

https://github.com/tailscale-dev/docker-mod


It looks like I need to regenerate the auth key every 90 days, which kind of kills this for me. I definitely don't want to have to update all my docker stuff every 90 days, and it's almost assuredly going to go offline right when I can't deal with it.


The trick is to persist the tailscale var volume. The auth key is only used when setting up a particular client the first time, once it's connected to your network the auth key is irrelevant.

If you're doing this with ephemeral containers then yes you'll need a way to roll auth keys. OAuth credentials don't expire and Tailscale has a command line single purpose tool to get an auth key given OAuth credentials, so that can be a viable alternative.

https://tailscale.com/kb/1215/oauth-clients/#get-authkey-uti...
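Assuming the get-authkey helper from the linked KB page, the OAuth flow looks roughly like this (flag and variable names are an assumption based on that page; double-check before relying on them):

```shell
# OAuth client credentials don't expire, unlike auth keys.
export TS_API_CLIENT_ID=...      # illustrative placeholder
export TS_API_CLIENT_SECRET=...  # illustrative placeholder

# Mint a fresh auth key on demand for an ephemeral container:
go run tailscale.com/cmd/get-authkey -ephemeral -tags tag:container
```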


Oh, that makes a huge difference, then. I had wondered why anything needed to be persistent.

Thanks!


This post is great, as the current state of network mesh is too complex for some users. That led me to write a simple Rust daemon that runs a TLS proxy and spawns the original app locally, reverse proxying requests, since the cost of implementing a full mesh just to have TLS across applications was too much for my team at the time. I didn't know about ONRUN, s6 and all that. Also, why not Tailscale as the mesh?


There is no ONRUN: "you can think of docker mods as a missing ONRUN hook"


I just love that this blog post includes an AI-generated image with the caption of course being the name of the model and the given prompt.


I always hate when the images don't have the prompt shown. I'm glad you appreciate me adding it there!


I have never been sure what the security implications are but I just set ports to the tailscale address, and everything is accessible.

So if the local tailscale address is 1.2.3.4, I do:

  ports:
    - 1.2.3.4:8080:8080

This doesn't actually add applications to the tailnet as in the OP, but it works.


Yeah, the main advantage of giving your containers their own IP addresses is the ability to use Tailscale as a service discovery mesh. If you combine this with MagicDNS, this gets you most of an 80:20 of Istio with about 10% of the configuration required!


All we need now is something for kubernetes


https://tailscale.com/kb/1236/kubernetes-operator

It's actually even easier to use: add `tailscale.com/expose: "true"` to a Kubernetes service's annotations and it will be added to the tailnet automatically
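A minimal sketch of that annotation on a Service (names and ports are illustrative; the annotation itself is from the comment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                   # illustrative
  annotations:
    tailscale.com/expose: "true"     # operator exposes this on the tailnet
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```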


What I think is the really cool part about this is that tailscaled is able to store its state in a Kubernetes secret on the fly so that it can dynamically update itself and handle being restarted on a new runner node. This isn't the same as true multiple nodes with the same IP, but when combined with automatic restarts it gets way closer to that in practice than it has any right to.


Wow, looks super comfy. Tailscale really seems to be doing everything right lately.


Is there any reason it wouldn't work with podman?


I don't think so. Though I am looking into a CNI plugin to make things more seamless.


This is really cool.


Article is six months old


And yet some of us are just finding out about this feature now.


Most of us*



