Tried this for my own homelab. Either I misconfigured it, or its working memory scales linearly at about 2x the stored data. For example, if I put in 1GB of data, SeaweedFS would immediately and constantly consume 2GB of memory!
That is odd. It likely has something to do with the index caching and how many replication volumes you configured. By default it indexes all file metadata in RAM (I think), but that alone wouldn't explain that kind of memory usage. I've always run mostly default configurations in Docker Swarm, similar to this:
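Roughly what the stock seaweedfs-compose.yml in the repo looks like, as a sketch rather than my exact stack file. The chrislusf/seaweedfs image and the ports are the defaults; the -index=leveldb flag is my guess at the first thing to try for the memory issue, since it moves the volume server's needle index out of RAM:

    version: "3.9"
    services:
      master:
        image: chrislusf/seaweedfs
        command: "master -ip=master -port=9333"
        ports:
          - "9333:9333"     # master HTTP API / UI
          - "19333:19333"   # master gRPC
      volume:
        image: chrislusf/seaweedfs
        # -index=leveldb keeps the per-needle index on disk instead of in RAM,
        # which is the first knob to try if memory grows with the stored data
        command: 'volume -mserver="master:9333" -port=8080 -index=leveldb'
        ports:
          - "8080:8080"
          - "18080:18080"
      filer:
        image: chrislusf/seaweedfs
        command: 'filer -master="master:9333"'
        ports:
          - "8888:8888"
          - "18888:18888"

Deployed with something like "docker stack deploy -c seaweedfs.yml seaweedfs". In Swarm the service names double as hostnames on the overlay network, which is why -mserver can just say master:9333.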
Depending on what you need it for, Nextcloud has WebDAV: clients can interact with it, and Windows can mount your home folder directly (I just tried it out a couple of days ago). I'd never used WebDAV before, so I'm unsure what other use cases there are, but the Nextcloud implementation (whatever it may be) was friction-free - everything just worked.
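In case it helps anyone, the Windows mount boils down to pointing the built-in WebDAV client at Nextcloud's files endpoint. A rough sketch (cloud.example.com and alice are placeholders for your server and username, and the WebClient service has to be running):

    :: map drive Z: to the Nextcloud home folder over WebDAV
    net use Z: https://cloud.example.com/remote.php/dav/files/alice/ /user:alice

You can do the same thing through "Map network drive" in File Explorer with that URL.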
https://github.com/seaweedfs/seaweedfs