Rust crates are especially challenging -- the build process is very expensive across all resources (CPU, RAM, and disk), and packages are very hard (impossible?) to cache. It works super well on the Pro plan, though.
Have you tried cargo-binstall instead? It installs pre-compiled crate binaries and falls back to cargo install when none are available; it works very well. You might also want to look into the mold linker and sccache.
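A minimal sketch of how those pieces fit together on a Linux box (the crate name is just an example, and this assumes clang and mold are already installed):

    # Prebuilt binaries where available, with a source-build fallback:
    cargo install cargo-binstall
    cargo binstall ripgrep    # example crate; falls back to `cargo install`

    # Cache compiler output across builds with sccache:
    cargo binstall sccache
    export RUSTC_WRAPPER=sccache

    # Link with mold (via clang) instead of the default linker:
    export RUSTFLAGS="-C linker=clang -C link-arg=-fuse-ld=mold"

The same settings can be made permanent in .cargo/config.toml (build.rustc-wrapper and the per-target linker/rustflags keys) rather than exported per shell.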
It’s communication meant for existing users, and it’s a nuanced picture worth elaborating on. But if you are not an active user, it’s best to just look at our pricing page for the complete picture: https://replit.com/pricing
I'm an occasional user, but the one thing holding me back is the almost complete lack of intellisense. I mostly work in JVM languages, and I get that JetBrains-level language awareness is hard, but if you're aiming for the professional market, that's what you're up against. And it's really frustrating, because you're solving a big problem! But JetBrains solves an even bigger one.
Yes, in fact we recently made the free plan more powerful with non-preemptible VMs, premium networking, and package caching for everyone. We're also working on an AI free tier, coming soon. We will likely invest more in the free plan once the abuse from unlimited hosting goes away.
Also -- the title is slightly misleading. Static hosting is still free, and autoscale (with scale to zero) will effectively cost next to nothing for most users. New programmers in particular get so little traffic that, according to our historical data, it will cost them about $0.20/month.
Makes it very much seem like you were only sorry you got caught — and were actually never sorry and didn’t learn from what should have been a teachable moment. Sad.
That's really interesting; indeed, I can reproduce this by changing the comment. I also managed to get correct output for this sample by renaming the function.
Is it, though? The major selling point of coding LLMs is that you can use natural language to describe what you want. If minor changes to wording - ones that would make no difference to a human - can produce drastically worse results, that feels problematic for real-world use.
Hi from the Codeium team. It's awesome to hear you're allowing other code LLMs on the Replit platform (we're big fans)! We'd love to enable our free Chrome extension on Replit.
Would love to be able to compare Codeium vs Ghostwriter inside Replit! (Or toggle between them based on known strengths or preferences, perhaps per project or per filetype.)
The model is not RLHF'd or instruction-tuned. It's an inline autocomplete model, so it will get confused if you talk to it like you're talking to a person (although it is possible to fine-tune it that way). To get better full-function completions, try giving it the function definition and a descriptive docstring as the prompt.
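To illustrate that prompting style, here's a minimal sketch; the HuggingFace pipeline and the small open CodeGen model are stand-ins of mine, not Ghostwriter's actual stack:

    # Illustrative stand-in: a small open code model via HuggingFace
    # transformers, in place of whatever inline-completion model you call.
    from transformers import pipeline

    generate = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

    # Prompt with the def line plus a descriptive docstring, not conversation:
    prompt = (
        "def median(values):\n"
        '    """Return the median of a non-empty list of numbers."""\n'
    )

    print(generate(prompt, max_new_tokens=64)[0]["generated_text"])

The model simply continues the code from where the docstring ends, which is why a well-written signature and docstring steer it far better than chat-style instructions.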
Tech talk here with timestamp: https://www.youtube.com/live/veShHxQYPzo?si=UlcU9j2kC-C4oWvj...