I’m slightly confused as to how all this works. Do the GPUs just sit there with the models on them when the models are not in use?
I guess I’d assumed this sort of thing would be allocated dynamically. Of course, there’s a benefit to minimizing the number of times you load a model. But surely if a GPU+model is idle for more than a couple minutes it could be freed?
(I’m not an AI guy, though—actually I’m used to asking SLURM for new nodes with every run I do!)
Let's say, then, that it's not so much "dumb and wasteful" as "energy inefficient". In fact, this can be quite wise in a modern world full of surveillance-as-a-business and "us-east-1 disasters".
Can you elaborate on the last statement? I don't quite understand why loading a local LLM into GPU RAM, using it for the job, and then "ejecting" it is a "dumb and wasteful" idea.
Because as a function of hardware and electricity costs, a “cloud” GPU will be many times more efficient per output token. You aren’t loading/offloading models and don’t have any parts of the GPU waiting for input. Everything is fully saturated always.
I believe GP means it's still connected to 'if this kind of latency is unacceptable to you' - i.e. you can't load/use/unload, you have to keep it in RAM all the time.
In that case it massively increases your memory requirement: not just the peak the model needs, but that plus whatever the biggest other use is that will inherently run concurrently with it.
> This (along with batching) is why large local models are a dumb and wasteful idea if you're not serving them at enterprise scale.
Local models are never a dumb idea. The only time it's dumb to use them in an enterprise is if the infra is a Mac Studio with M3 Ultra, because prompt processing (pp) time is terrible.
Models take a lot of VRAM, which is tightly coupled to the GPU, so yeah, it's basically sitting there with the model waiting for use. They probably do idle out eventually, but a few minutes of idle time is a lot of waste--possibly the full 82% mentioned. In this case they optimized by letting the GPUs load multiple models and sharing the load out by token.
They definitely won't idle out: if they did, it would take on the order of 60 seconds to load the model back into VRAM, depending on the model.
That's an eternity for a request. I highly doubt they will timeout any model they serve.
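For a ballpark on that load time (my numbers, not the parent's): just streaming the BF16 weights of a ~70B model off local NVMe already eats tens of seconds.

    # Back-of-envelope only; both figures below are assumptions.
    model_size_gb = 140   # ~70B params * 2 bytes (BF16)
    nvme_read_gbps = 3.0  # sustained sequential read from local NVMe
    print(model_size_gb / nvme_read_gbps)  # ~47 s, before any deserialization overhead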
> That's an eternity for a request. I highly doubt they will timeout any model they serve.
That's what easing functions are for.
Let's say 10 GPUs are in use. You keep another 3 with the model loaded. If demand increases slowly you slowly increase your headroom. If demand increases rapidly, you also increase rapidly.
The correct way to do this is more complicated (you should model it based on your usage history), but if you have sufficient headroom then very few GPUs should be left idle. Remember that these models do requests in batches.
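A toy sketch of that headroom idea (all names and thresholds are made up, and this has nothing to do with the paper):

    # Keep a buffer of warm GPUs with the model already resident, and grow the
    # buffer faster when demand is rising, so nobody eats a cold ~60 s model load.
    def target_instances(in_use: int, growth_per_min: float, min_headroom: int = 3) -> int:
        headroom = min_headroom
        if growth_per_min > 0:
            headroom += int(2 * growth_per_min)  # crude "easing": provision ahead of the trend
        return in_use + headroom

    print(target_instances(in_use=10, growth_per_min=0.5))  # 14: 10 busy + 4 warm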
If they don't timeout models, they're throwing money down the drain. Though that wouldn't be uncommon.
That's only if you're expecting 10 GPUs in use. They're dealing with ~1 GPU in use for a model, just sitting there. Alibaba has a very long tail of old models that barely anyone uses anymore, and yet they still serve.
Here's a quote from the paper above:
> Given a list of M models to be served, our goal is to minimize the number of GPU instances N required to meet the SLOs for all models through auto-scaling, thus maximizing resource usage. The strawman strategy, i.e., no auto-scaling at all, reserves at least one dedicated instance for each model, leading to N = O(M)
For example, Qwen2 72B is rarely used these days. And yet it will take up 2 of their H20 GPUs (with 96 GB VRAM each) to serve, at the bare minimum, assuming that they don't quantize the BF16 down to FP8 (and I don't think they would, although other providers probably would). And then there are the other older models, like the Qwen 2.5, Qwen 2, Qwen 1.5, and Qwen 1 series models. They all take up VRAM if the endpoint is active!
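The VRAM arithmetic behind that (my own back-of-envelope, counting weights only):

    # BF16 is 2 bytes per parameter; an H20 has 96 GB of VRAM.
    weights_gb = 72e9 * 2 / 1e9   # ~144 GB just for the weights
    print(weights_gb / 96)        # ~1.5 -> at least 2 H20s, before any KV cache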
Alibaba cannot easily just timeout these models from VRAM, even if they only get 1 request per hour.
That's the issue. Their backlog of models takes up a large amount of VRAM, and yet gets ZERO compute most of the time! You can easily use an easing function to scale up from 2 GPUs to 200 GPUs, but you can never time out the last 2 GPUs that are serving the model.
If you read the paper linked above, it's actually quite interesting how Alibaba goes and solves this problem.
Deepseek, on the other hand, solves the issue by just saying "fuck you, we're serving only our latest model and you can deal with it". They're pretty pragmatic about it at least.
If I had to handle this problem, I'd do some kind of "split across already-loaded GPUs" for new sessions, and then when some cap is hit, spool up an additional GPU in the background and transfer the new session to that GPU as soon as the model is loaded.
I'd have to play with the configuration and load calcs, but I'm sure there's a low param, neat solution to the request/service problem.
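Something like this toy version, maybe (hypothetical names; a real scheduler would obviously be far more involved):

    from dataclasses import dataclass, field

    @dataclass
    class Gpu:
        model_loaded: bool = True
        sessions: list = field(default_factory=list)

    def route(gpus: list[Gpu], session: str, cap: int = 8) -> Gpu:
        # Prefer the least-loaded GPU that already has the model resident.
        ready = [g for g in gpus if g.model_loaded]
        target = min(ready, key=lambda g: len(g.sessions))
        if len(target.sessions) >= cap:
            # Cap hit: start loading the model onto a fresh GPU in the background;
            # sessions migrate there once the load finishes.
            gpus.append(Gpu(model_loaded=False))
        target.sessions.append(session)
        return target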
> I guess I’d assumed this sort of thing would be allocated dynamically
At the scale of a hyperscaler I think Alibaba is the one that would be doing that. AWS, Azure and I assume Alibaba do lease/rent data centers, but someone has to own the servers / GPU racks. I know there are specialized companies like nscale (and more further down the chain) in the mix, but I always assumed they only lease out fixed capacity.
The paper is about techniques for doing that dynamic allocation to maximize utilization without incurring unacceptable latencies. If you let a GPU sit idle for several minutes after serving a single request, you're setting money on fire, so they reuse it for a different model as soon as possible, starting even before the first request is finished. After all, if you don't have a dedicated GPU for a model, are you going to wait for a multi-gigabyte transfer before each request? So they have a dedicated GPU (or two: one for prefill, one for decode) for a group of models that are processed in an interleaved fashion, scheduled such that they stay within the latency budget.
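Roughly in that spirit (my sketch, not the paper's actual algorithm): each step, run a decode step for whichever request, across all the co-located models, is closest to blowing its latency budget.

    # Earliest-deadline-first over requests from several models sharing one GPU.
    # 'deadline' is the absolute time by which the request needs its next token.
    def pick_next(requests: list[dict]) -> dict:
        return min(requests, key=lambda r: r["deadline"])

    batch = [
        {"model": "qwen2-72b", "deadline": 3.1},
        {"model": "qwen2.5-7b", "deadline": 2.4},
    ]
    print(pick_next(batch)["model"])  # the qwen2.5-7b request decodes first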
>Do the GPUs just sit there with the models on them when the models are not in use?
I've assumed that as well. It makes sense to me, since loading up a model locally takes a while. I wonder if there's some better way that I'm not in the know about. That, or I'm too GPU-poor to know about it.