
+1 this. Whisper works insanely well. I've been using the medium model, as it has yet to mistranscribe anything noticeable, and it's very lightweight. I even converted it to a Core ML model so it runs accelerated on Apple Silicon. It doesn't run *that* much faster than before... but it ran really fast to begin with. For anyone tinkering, I've had much success with whisper.cpp.
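For anyone who hasn't tried it yet, the plain Python route is already pretty painless before you even get to whisper.cpp or Core ML. A minimal sketch, assuming the openai-whisper package is installed ("audio.wav" is just a placeholder file name):

    import whisper

    # load the medium checkpoint (downloaded on first use)
    model = whisper.load_model("medium")

    # "audio.wav" is a placeholder; any ffmpeg-readable file works
    result = model.transcribe("audio.wav")
    print(result["text"])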

What was the process of converting it like? I assume you then had to write all of the inference code as well?
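Not OP, and this isn't necessarily their exact pipeline, but the general shape of a PyTorch-to-Core ML conversion with coremltools is roughly trace-then-convert. TinyEncoder below is a made-up stand-in for Whisper's audio encoder, just to show the flow:

    import torch
    import coremltools as ct

    # hypothetical stand-in for Whisper's audio encoder, not the real model
    class TinyEncoder(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv1d(80, 384, kernel_size=3, padding=1)

        def forward(self, mel):
            return torch.relu(self.conv(mel))

    model = TinyEncoder().eval()
    example = torch.randn(1, 80, 3000)        # (batch, n_mels, frames)
    traced = torch.jit.trace(model, example)  # trace to TorchScript first

    # convert the traced graph to an ML Program and save it as .mlpackage
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="mel", shape=example.shape)],
        convert_to="mlprogram",
    )
    mlmodel.save("tiny_encoder.mlpackage")

whisper.cpp itself ships conversion scripts for the real encoder (see its models/ directory and the Core ML section of its README), so in practice you shouldn't have to write the inference glue from scratch.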
