I deployed an ML solution in Rust back in 2020 (tch-rs instead of tensorflow, though), so this has been possible for a while.
Surprisingly, I saw some impressive speed gains vs Python (and vs JS, which I also tried), probably because I was running tons of images through it and the percentage of time spent doing inference wasn't very high, so the surrounding image handling dominated and that's the part Rust sped up.
Deployment wasn't the easiest, but doable.
If I were doing Rust today, instead of jumping on the tensorflow or torch boat I would consider using https://github.com/huggingface/candle
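For anyone curious what that looks like, here's a minimal sketch roughly mirroring candle's own hello-world, just to show the shape of the API (the candle-core version pin and any GPU setup are on you, and this assumes the default CPU build):

    // Cargo.toml: candle-core = "0.x" (pick a current version)
    use candle_core::{Device, Tensor};

    fn main() -> candle_core::Result<()> {
        // Run on CPU; with a CUDA build you could use Device::new_cuda(0)? instead.
        let device = Device::Cpu;

        // Two random tensors and a matmul, about as small as an example gets.
        let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
        let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;
        let c = a.matmul(&b)?;
        println!("{c}");
        Ok(())
    }

No Python runtime, no libtorch to ship alongside the binary, which was the painful part of my tch-rs deployment.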