I'm not convinced on the UX of that. Developers are lazy. What would more than likely end up happening is they'd explicitly set it to "auto" and modify the query to get a more performant plan. Past a certain query complexity, it becomes exceedingly hard to piece together plans by hand.
A bunch of performance problems aren't solved by tweaking the query at all. They're solved by changing the database structure: adding the appropriate indexes, making sure datatypes match between joined columns so no implicit conversions happen, right-sizing columns.
No amount of specifying your own query plan will do anything to affect those kinds of issues.
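As a quick illustration of the "fix the structure, not the query" point, here's a sketch using SQLite via Python's stdlib (the `orders` table and index names are hypothetical): adding the appropriate index changes the chosen plan from a full table scan to an index search, with zero changes to the query text.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the planner has no choice but to scan the whole table.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_before[0][-1])  # detail column mentions "SCAN"

# Same query text, after adding the appropriate index: the planner now
# searches the index instead.
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_after[0][-1])  # detail column mentions "SEARCH ... USING INDEX"
```

No hand-written plan was involved; the optimizer picked the better plan on its own once the structure supported it.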
In practice I find the most common source of problems is cardinality estimate issues, where the database either over- or under-provisions memory for the query. If it estimates it will get back much more data than it actually does, it grants too much memory, which reduces concurrency because that memory can't be used by other queries. If it underestimates, it doesn't grant enough memory, and when it gets back more rows than will fit in memory it has to spill them to disk temporarily, taking a massive hit in I/O performance.
How does your scheme figure out how much memory the database should grant to a query when the plan is specified by hand?
What's more, plans change over time as statistics change. Leaving it up to the query optimizer means it's adaptive. Specifying the plan yourself means you have to know at design time that the optimizer isn't giving you the best plan. There are cases where you know that, and cases where you don't.
You can already specify query hints and the like; I think SQL has this covered. I have no qualms if the query changes slightly.
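For what it's worth, even SQLite exposes a hint-style escape hatch: `INDEXED BY` pins a specific index for a query without restructuring it, which is the kind of targeted override hints give you. A minimal sketch (the `users` table and index names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
con.execute("CREATE INDEX idx_users_email ON users(email)")

# INDEXED BY forces the named index and errors out if it can't be used,
# so the plan choice is pinned while the rest stays up to the optimizer.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT name FROM users INDEXED BY idx_users_email WHERE email = ?",
    ("a@example.com",),
).fetchall()
print(plan[0][-1])  # detail column names idx_users_email
```

Other engines offer the same idea under different syntax (e.g. SQL Server's `OPTION (...)` clause), so you get per-query control only where you actually need it.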