I can't speak for his use case. However, there are people with medical conditions like RSI or stroke, or anything else that limits their ability to use a keyboard and mouse.
That said, the average developer doesn't need those fine-grained navigation controls but can still benefit from enhanced input through voice. Some people have mental disabilities and interface differently. Others simply supplement their keyboard and mouse with voice as a preventative measure against repetitive strain injury (RSI). The hope is to one day develop something whose value every developer can see and leverage. In a way, accessibility is for everyone.
In general, I see accessibility as a hierarchy that could benefit everyone: accessibility APIs, close-to-real-time OCR, eye tracking, and alternative inputs (e.g. pedals, touchpads, styluses) allow for the broadest possible input, while APIs extract information from applications. Extracting information from applications and feeding input back into them lets users specialize for their own use case.
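To make the extraction side concrete, here's a minimal sketch using pywinauto's UI Automation backend (Windows-only; the window title, control type, and typed text are assumptions for illustration, not a prescription):

```python
from pywinauto import Application

# Attach to a running app via the accessibility tree (assumed title).
app = Application(backend="uia").connect(title_re=".*Notepad.*")
window = app.top_window()

# Read text out of the application through its UI Automation elements...
for edit in window.descendants(control_type="Edit"):
    print(edit.window_text())

# ...and push input back in, so a voice system can act on what it read.
window.type_keys("hello from a voice command", with_spaces=True)
```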
In my experience, as people become experts in voice, their command vernacular shortens as they carve out their niche use case. It goes beyond singular shortcuts to series of actions that get stuff done. However, what really needs to happen for voice systems to shine is access to the OS and to the application itself. That would empower not only navigation for those who are disabled, but also context-specific commands that are intuitive and abstracted, like "next function" or "next parameter".
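As a sketch of what those context-specific commands could look like, here's a small Python dispatcher. It isn't any particular voice framework's API; the phrase names, key chords, and active_app detection are all hypothetical stand-ins:

```python
from typing import Callable, Dict

# Hypothetical: in a real system this would come from the OS
# accessibility API (e.g. the focused window's process name).
def active_app() -> str:
    return "editor"

Action = Callable[[], None]

def press(chord: str) -> Action:
    # Stand-in for real key synthesis; here we just log the chord.
    return lambda: print(f"pressing {chord}")

# Each spoken phrase maps to a per-application action, so a short
# command like "next function" can mean different things in different apps.
GRAMMAR: Dict[str, Dict[str, Action]] = {
    "editor": {
        "next function": press("ctrl+shift+down"),  # assumed binding
        "next parameter": press("tab"),             # assumed binding
    },
    "browser": {
        "next function": press("ctrl+f"),  # degrade to plain search
    },
}

def dispatch(phrase: str) -> None:
    """Route a recognized phrase to the action for the active app."""
    action = GRAMMAR.get(active_app(), {}).get(phrase)
    if action is None:
        print(f"no binding for {phrase!r}")
        return
    action()

dispatch("next function")   # -> pressing ctrl+shift+down
dispatch("next parameter")  # -> pressing tab
```

The point of the per-app grammar is exactly the vernacular-shortening effect: once the system knows the context, two words can stand in for a whole chain of keystrokes.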