That to me looks like an error in whatever logic is behind the positional error code. You'd think they would have transformer models based on different layouts, but maybe there are some weighting issues going on, i.e. I would have thought it's a model that adjusts based on likelihood weights, and maybe something's up with that.
I've used it; lots of false positives out of the box. You need to do a ton of tuning or pair a transformer/BERT model with it, but at that point it's basically the same thing as the OP's project.
Can I have this between my machine and git please? It's twice now that I've committed .env* and it totally passed me by (usually because it's to a private repo), then later on we/someone clears down the files and forgets to rewrite git history before pushing live. It should never have got there in the first place. (I wish GitHub did a scan before making a repo public.)
GitHub does warn you when you have API keys in your repo. Alternatively, there are CLI tools such as TruffleHog that you can put in pre-commit hooks so they run automatically before each commit.
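For the parent's .env problem specifically, even a tiny hook helps. A rough sketch of a pre-commit hook written as a Node/TypeScript script (the file pattern and wording are illustrative; a real secret scanner like TruffleHog would be chained in alongside this, and you'd run it via something like tsx or compile it first):

```ts
#!/usr/bin/env node
// Illustrative pre-commit hook: refuses to commit staged .env-style files.
// Not a substitute for a proper scanner such as TruffleHog.
import { execSync } from "node:child_process";

// Names of files staged for this commit.
const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

// Anything that looks like an env file: ".env", ".env.local", "config/.env.prod", ...
const offenders = staged.filter((f) => /(^|\/)\.env(\..+)?$/.test(f));

if (offenders.length > 0) {
  console.error("Refusing to commit env files:");
  for (const f of offenders) console.error(`  ${f}`);
  console.error("Unstage them (git restore --staged <file>) or add them to .gitignore.");
  process.exit(1); // a non-zero exit aborts the commit
}
```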
Already mentioned it in another reply, but .env files and passing secrets as environment variables are a tragedy. Take a look at how SecureStore keeps secrets encrypted at rest; you're even advised to commit them to git!
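For anyone unfamiliar with the pattern, the idea is roughly this. A hand-rolled sketch of "encrypted at rest, so the file can live in the repo" using Node's crypto module; this is not SecureStore's actual format or API, and the passphrase handling here is deliberately simplistic:

```ts
// Sketch of the "encrypted at rest" idea only; SecureStore's real format/API differs.
import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from "node:crypto";
import { writeFileSync, readFileSync } from "node:fs";

// The passphrase (or derived key) is the one thing that must NOT be committed.
// A real setup would also use a random per-file salt instead of a fixed one.
const key = scryptSync(process.env.SECRETS_PASSPHRASE ?? "dev-only", "fixed-salt", 32);

export function saveSecrets(path: string, secrets: Record<string, string>): void {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(JSON.stringify(secrets), "utf8"),
    cipher.final(),
  ]);
  // The file on disk is ciphertext + IV + auth tag, so committing it is safe-ish.
  writeFileSync(path, JSON.stringify({
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    data: ciphertext.toString("base64"),
  }));
}

export function loadSecrets(path: string): Record<string, string> {
  const { iv, tag, data } = JSON.parse(readFileSync(path, "utf8"));
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(iv, "base64"));
  decipher.setAuthTag(Buffer.from(tag, "base64"));
  const plaintext = Buffer.concat([
    decipher.update(Buffer.from(data, "base64")),
    decipher.final(),
  ]);
  return JSON.parse(plaintext.toString("utf8"));
}
```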
Meta cheated with the MMS models, in that they didn't use a phonemizer step. This means they just won't work or will sound very strange; ASR data is usually not quite right for TTS. Anyhow, not really answering your question, but many of these languages are already done in MMS. Try them: https://huggingface.co/spaces/willwade/sherpa-onnx-tts
We built this for our use case (we create solutions to help people who have a disability to speak). It's a prediction model you can run in Node or the browser: next word, next character, word completion. PPM is old, but it still rocks.
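For anyone curious what that kind of prediction looks like, here's a much-simplified, hypothetical order-2 character model with backoff, just to give the flavour. Real PPM adds escape probabilities, higher orders, and proper handling of novel symbols, and this is not their library's code:

```ts
// Toy next-character predictor: counts what follows each 1- and 2-char context,
// then predicts from the longest context seen. Only the flavour of PPM.
class TinyPredictor {
  private counts = new Map<string, Map<string, number>>();

  train(text: string): void {
    for (let order = 1; order <= 2; order++) {
      for (let i = order; i < text.length; i++) {
        const ctx = text.slice(i - order, i);
        const next = text[i];
        const table = this.counts.get(ctx) ?? new Map<string, number>();
        table.set(next, (table.get(next) ?? 0) + 1);
        this.counts.set(ctx, table);
      }
    }
  }

  // Longest context with observations wins; otherwise back off to a shorter one.
  predict(context: string, k = 3): string[] {
    for (const ctx of [context.slice(-2), context.slice(-1)]) {
      const table = this.counts.get(ctx);
      if (table && table.size > 0) {
        return [...table.entries()]
          .sort((a, b) => b[1] - a[1])
          .slice(0, k)
          .map(([ch]) => ch);
      }
    }
    return [];
  }
}

// Usage: train on some text, then ask for likely next characters.
const p = new TinyPredictor();
p.train("the quick brown fox jumps over the lazy dog. the three things.");
console.log(p.predict("th")); // likely ["e", "r", "i"], depending on counts
```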
I'm interested to see how this compares with other Heroku clones. The compassion stuff is interesting. I'm using apps on DigitalOcean; can we get a comparison of using an app vs. a droplet + blossom?
I'm with the other person too. Drop the emojis and confidence in the document goes up. We all know coding agents JUST LOVE filling a document up with emojis; it makes you wonder if it imagined the benefits too.
Interesting, I would love to hear how well those worked.
Grammit uses the Prompt API for the local LLM, which on Chrome is currently backed by a version of the Gemma 3n model.
Grammit uses prompting instead of fine-tuning or custom training. Simplified, it has a system prompt along the lines of: "Rewrite this text, correcting any grammar or spelling mistakes," combined with a prefilled conversation containing a number of examples showing an input sentence and the corrected output.
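A rough sketch of what that can look like with the Prompt API. The API surface has been shifting between Chrome releases, so the LanguageModel shape is declared here as an assumption rather than taken from official typings, and the example sentences are made up, not Grammit's actual prompts:

```ts
// Sketch of a system prompt + prefilled few-shot examples via Chrome's Prompt API.
// The declared type below is an assumption about the current API, not Grammit's code.
declare const LanguageModel: {
  create(options?: {
    initialPrompts?: { role: "system" | "user" | "assistant"; content: string }[];
  }): Promise<{ prompt(input: string): Promise<string> }>;
};

async function correctGrammar(text: string): Promise<string> {
  const session = await LanguageModel.create({
    initialPrompts: [
      { role: "system", content: "Rewrite this text, correcting any grammar or spelling mistakes." },
      // Prefilled conversation so the model answers with just the corrected sentence.
      { role: "user", content: "me and him goes to the store yesterday" },
      { role: "assistant", content: "He and I went to the store yesterday." },
      { role: "user", content: "Their going to love it's new features" },
      { role: "assistant", content: "They're going to love its new features." },
    ],
  });
  return session.prompt(text);
}

// Usage (somewhere the API is actually available, e.g. an extension page):
// correctGrammar("she dont know nothing about it").then(console.log);
```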