
Creative professionals rely on physical keyboards and mouse buttons for actions, and only use spatial input for things that are spatially relevant (pan, placement, zoom, rotation, selecting surfaces, edges, vertices, etc.).

That used to be true. Autodesk put a lot of effort into interfaces for engineering in 3D. In Inventor, you only need the keyboard to enter numbers or names. They managed to do it all with the mouse. Try Fusion 360 to see this; there's a free demo.




You mean the right-click radial context menus? Yes, those are really nice. They require much less of a feedback loop and often don't require any visual processing at all. They are also more gesture-based than button-based. The affordance is much more generous than what buttons and icons offer.
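
To make the gesture point concrete: a radial menu only cares about drag direction, so picking an item reduces to mapping an angle to a sector, and a practiced user doesn't need to read the labels at all. A minimal sketch in Python; the function and parameter names are made up for illustration, not anything from Fusion:

  import math

  def pick_sector(dx, dy, n_sectors=8, dead_zone=12.0):
      """Map a mouse drag (dx, dy in pixels) to a radial menu sector.

      Returns None while the cursor is still inside the dead zone,
      otherwise the index 0..n_sectors-1 of the slice the drag points at.
      Only the drag direction matters, so the selection works without
      looking at the menu once it's in muscle memory.
      """
      if math.hypot(dx, dy) < dead_zone:
          return None
      angle = math.atan2(-dy, dx) % (2 * math.pi)   # screen y grows downward
      width = 2 * math.pi / n_sectors
      # offset by half a slice so sector 0 is centered on "east"
      return int(((angle + width / 2) // width) % n_sectors)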

I use a 3DConnexion Space Navigator (6 DOF) in my left hand and a mouse in my right for selection when using Fusion 360, and I often use the gestures on the mouse.
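
Roughly, the puck drives the view every frame while the mouse hand stays free to pick. A hypothetical sketch of that split; read_axes() stands in for whatever the driver actually exposes (the real 3DxWare SDK looks different):

  # read_axes() is a placeholder (hypothetical, not the real 3DxWare API);
  # each axis is assumed to be normalized to -1..1.

  def spacemouse_deltas(axes, dt, t_speed=0.5, r_speed=1.0):
      """Turn raw 6-DOF puck axes into per-frame camera deltas."""
      tx, ty, tz, rx, ry, rz = axes
      pan   = (tx * t_speed * dt, ty * t_speed * dt)   # slide the view
      dolly = tz * t_speed * dt                        # push in / pull out
      orbit = (rx * r_speed * dt, ry * r_speed * dt, rz * r_speed * dt)
      return pan, dolly, orbit

  # per frame: pan, dolly, orbit = spacemouse_deltas(read_axes(), dt)
  # then apply them to the viewport camera, leaving the mouse free to select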

I guess that brings up an exception: the context-switching cost. Moving a hand from the pointer to the keyboard is very slow, so gestures really help in that regard. If my hand is already on the keyboard, then I have less reason to use the gestures.


That, plus the ability to rotate, pan, and zoom while you're in the middle of a selection. That's a huge win. In 3D work, you often need to select two or more things, and those things may be small and need precise selection. Precise multiple selection is hard.

Before this was worked out, most 3D programs offered four panes, three axial projections (usually wireframe) plus a solid perspective view, just so you could select. Now we only need one big 3D pane.
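
What made the single perspective pane workable is ordinary ray picking: unproject the click into a world-space ray and test candidates against it, with the free-orbiting camera supplying the precision the axial views used to provide. A rough numpy sketch under those assumptions (names and tolerance are made up):

  import numpy as np

  def pick_vertex(mouse_ndc, inv_view_proj, vertices, tolerance=0.01):
      """Pick the vertex a click points at in a single perspective view.

      mouse_ndc     : click position in normalized device coords (x, y in -1..1)
      inv_view_proj : inverse of the camera's 4x4 view-projection matrix
      vertices      : (N, 3) array of candidate points in world space
      tolerance     : max perpendicular distance from the ray (world units)

      Returns the index of the nearest hit along the ray, or None.
      """
      # Unproject the click at the near and far clip planes to get a world ray.
      near = inv_view_proj @ np.array([mouse_ndc[0], mouse_ndc[1], -1.0, 1.0])
      far  = inv_view_proj @ np.array([mouse_ndc[0], mouse_ndc[1],  1.0, 1.0])
      near, far = near[:3] / near[3], far[:3] / far[3]
      d = (far - near) / np.linalg.norm(far - near)

      best, best_t = None, np.inf
      for i, v in enumerate(np.asarray(vertices, dtype=float)):
          to_v = v - near
          t = to_v @ d                  # depth of the closest point on the ray
          if t <= 0:                    # behind the camera
              continue
          perp = np.linalg.norm(to_v - t * d)
          if perp < tolerance and t < best_t:
              best, best_t = i, t
      return best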



