
M4 Max, 128 GB RAM.

LM Studio MLX with the full 128k context.

It works well, but initial prompt processing takes a long time, about a minute.

I wouldn't buy a laptop for this; I'd wait for the new 32 GB AMD GPU that's coming out.

If you do want a laptop, I consider even my M4 Max too slow for more than occasional use.

It runs hot and the battery drains fast. You really have to keep it docked for full speed.
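
If anyone wants to reproduce the prompt-processing time and see tokens/sec outside of LM Studio, mlx_lm prints both when run with verbose output. Rough, untested sketch; the model repo id is a placeholder for whatever MLX build you actually downloaded:

    # Times one generation with mlx_lm; verbose=True makes mlx_lm print
    # its own prompt and generation tokens-per-second stats.
    import time
    from mlx_lm import load, generate

    # Placeholder repo id -- swap in the MLX model you actually use.
    model, tokenizer = load("mlx-community/your-model-4bit")

    prompt = open("long_prompt.txt").read()  # your long prompt

    start = time.time()
    generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
    print(f"total wall time: {time.time() - start:.1f}s")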

Do you also have a tokens-per-second number?

Yep, I have an M4 Max Mac Studio with 128 GB of RAM; even the Q8 GGUF fits in memory with 131k context. Memory pressure sits at 45%, lol.

How many tokens per second are you both getting?
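
LM Studio shows tok/s in the UI after each response, but if you want a number you can script and compare across machines, something like the sketch below against its local OpenAI-compatible server can work. Untested; it assumes the default port 1234 and that the server reports a usage block like the OpenAI API, and the model field is a placeholder.

    # Crude tok/s measurement against LM Studio's local server.
    # The denominator includes prompt-processing time, so it will read
    # lower than the UI's pure generation speed.
    import time
    import requests

    t0 = time.time()
    r = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder; use the model you loaded
            "messages": [{"role": "user",
                          "content": "Write ~300 words on anything."}],
            "max_tokens": 400,
        },
        timeout=600,
    )
    elapsed = time.time() - t0
    usage = r.json()["usage"]
    print(f"{usage['completion_tokens'] / elapsed:.1f} tok/s "
          f"({usage['prompt_tokens']} prompt tokens, {elapsed:.1f}s wall)")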


