GitHub Copilot already has speech-to-text and, as my sibling comment mentions, on the Mac it's globally available. It varies with typing and speaking speed, but speaking should be about five times faster than typing.
On a Mac you can just use a hotkey to talk to an agentic CLI. It still needs a bit more polish IMO, like removing the hotkey requirement and adding a voice command to interrupt the agent's current task.
I believe it does on newer Macs (the M4 has a Neural Engine). It's not perfect, but I'm using it without issue. I suspect it'll get better each generation as Apple leans further into its AI offering.
There are also third-party tools like Wispr that I haven't tried but that might do a better job? No idea.
The Mac one is pretty limited. I paid for a tool similar to the one above, and the LLM backing makes the output so much better. All my industry-specific jargon gets captured perfectly, whereas Apple's dictation just made up nonsense.
I'm convinced I spend more time typing, and type more letters and words, when AI coding than when not.
My hands hurt more from all the extra typing I have to do now lol.
I'm actually annoyed they haven't integrated their voice-to-text models into their coding agents yet.