The difference is that the SSPL was written in bad faith, with the explicit intent that nobody can actually comply with the parts about running a hosted service.
See section 13, "Offering the Program as a Service". To comply with that you need to release all software used "to make the Program or modified version available as a service" under the SSPL.
For something like Redis this includes: Redis itself, the OS you are using to host Redis, the drivers and firmware for the hardware you are hosting Redis on, and more. Also your whole deployment stack, up to and including the mouse driver you use to click the "deploy" button.
It is an absurd condition that effectively makes section 13 say "you can't offer a hosted version", and the OSI and FSF are right to reject the fig leaf of "it's like the AGPL, but more".
It sort of can, but all non-Adobe software I know of, even commercial stuff like Affinity Photo, has spotty support for some PSD features.
Basically, any given PSD will certainly load correctly in Photoshop, but you're rolling the dice if you want to load it into anything else, all the more so if it uses more modern features.
I've made a tiny ~1M-parameter model that can generate random Magic: The Gathering cards; it's largely based on Karpathy's nanoGPT with a few more features added on top.
I don't have a pre-trained model to share, but you can train one yourself from the git repo, assuming you have an Apple Silicon Mac.
In general you can just use the parameter count to figure that out.
A 70B model at 8 bits per parameter would mean 70GB, at 4 bits 35GB, etc. But that is just the raw weights; you also need some RAM for the data passing through the model, and the OS eats up some, so add about a 10-15% buffer on top of that to make sure you're good.
Also, quality falls off pretty quickly once you start quantizing below 4-bit, so be careful with that, but at 3-bit a 70B model should run fine in 32GB of RAM.
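As a quick sanity check, here's that arithmetic as a minimal Python sketch; the 15% overhead is the rough buffer assumed above, not a measured figure.

```python
# Back-of-the-envelope RAM estimate for running a quantized model locally.
def estimated_ram_gb(params_billions: float, bits_per_param: float,
                     overhead: float = 0.15) -> float:
    """Raw weight size plus a rough buffer for activations and the OS."""
    weights_gb = params_billions * bits_per_param / 8  # 8 bits per byte
    return weights_gb * (1 + overhead)

for bits in (8, 4, 3):
    print(f"70B @ {bits}-bit: ~{estimated_ram_gb(70, bits):.1f} GB")
# 70B @ 8-bit: ~80.5 GB
# 70B @ 4-bit: ~40.2 GB
# 70B @ 3-bit: ~30.2 GB
```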
If you look in the `config.json`[1], it shows `Zamba2ForCausalLM`. You can do inference with any version of the transformers library that supports that architecture.
The model card states that you have to use their fork of transformers.[2]
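For reference, a minimal sketch of what inference would look like through the standard transformers API, assuming the fork from the model card is installed in place of stock transformers; the model id below is a placeholder assumption, not taken from the card.

```python
# Minimal sketch: causal LM inference via the standard transformers API.
# Assumes the transformers fork from the model card is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Zyphra/Zamba2-2.7B"  # assumption: substitute the actual model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```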
Dev of https://recurse.chat/ here, thanks for mentioning! Right now we are focusing on features like shortcuts/floating window, but we will look into supporting this in some time. To add to the llama.cpp support discussion, it's also worth noting that llama.cpp does not yet support GPU for Mamba models: https://github.com/ggerganov/llama.cpp/issues/6758
I'm no trademark lawyer, but isn't offering "WordPress" hosting fine as long as you are genuinely using the WordPress software? As I understand it that is purely nominative use.
I see that sentiment largely coming from developers who, I think, misunderstand the freedom that the GPL is protecting.
The GPL focuses on the user's freedom to modify any software they are using to better suit their own needs, and it does a great job of it.
The people saying that it is less free than BSD/MIT/Apache are looking at it from a developer's perspective. The GPL does deliberately limit a developer's freedom to include GPL code in a proprietary product, because that would restrict the users' freedom to modify the code.